Chris King
CC BY-NC-ND
4.0 doi:10.13140/RG.2.2.32891.23846
Part 2 Conscious Cosmos ◊ Update 5-8-2021 ◊ 4-2023
Contents Summary - Contents in Full
Symbiotic Existential Cosmology:
Scientific Overview – Discovery and Philosophy
Biocrisis, Resplendence and Planetary Reflowering
Psychedelics in the Brain and Mind, Therapy and Quantum Change, The Devil's Keyboard
Fractal, Panpsychic and Symbiotic Cosmologies, Cosmological Symbiosis
Quantum Reality and the Conscious Brain
The Cosmological Problem of Consciousness in the Quantum Universe
The Physical Viewpoint, The Neuroscience Perspective
The Evolutionary Landscape of Symbiotic Existential Cosmology
Evolutionary Origins of Conscious Experience
Science, Religion and Gene Culture Co-evolution
Animistic, Eastern and Western Traditions and Entheogenic Use
Natty Dread and Planetary Redemption
Yeshua’s Tragic Mission, Revelation and Cosmic Annihilation
Ecocrisis, Sexual Reunion and the Entheogenic Traditions
Communique to the World: To save the diversity of life from mass extinction
The Vision Quest to Discover Symbiotic Existential Cosmology
The Evolution of Symbiotic Existential Cosmology
Appendix: Primal Foundations of Subjectivity, Varieties of Panpsychic Philosophy
Consciousness is eternal, life is immortal.
Incarnate existence is Paradise on the Cosmic equator
in space-time – the living consummation of all worlds.
But mortally coiled! As transient as the winds of fate!
Symbiotic Existential Cosmology – Contents in Full
The Existential Condition and the Physical Universe
Discovering Life, the Universe and Everything
The Central Enigma: What IS the Conscious Mind?, Glossary
Biocrisis and Resplendence: Planetary Reflowering
The Full Scope: Climate Crisis, Mass Extinction, Population and Nuclear Holocaust
Psychedelics in the Brain and Mind
Therapy and Quantum Change: The Results from Scientific Studies
Biocosmology, Panpsychism and Symbiotic Cosmology
Darwinian Cosmological Panpsychism
Symbiosis and its Cosmological Significance
Quantum Reality and the Conscious Brain
The Cosmological Problem of Consciousness
The Physical Viewpoint, Quantum Transactions
The Neuroscience Perspective, Field Theories of Consciousness
Conscious Mind, Resonant Brain
Cartesian Theatres and Virtual Machines
Global Neuronal Workspace, Epiphenomenalism & Free Will
Consciousness and Surviving in the Wild
Consciousness as Integrated Information
Is Consciousness just Free Energy on Markov Landscapes?
Can Teleological Thermodynamics Solve the Hard Problem?, Quasi-particle Materialism
The Crack between Subjective Consciousness and Objective Brain Function
A Cosmological Comparison with Chalmers’ Conscious Mind
Minimalist Physicalism and Scale Free Consciousness
Defence of the real world from the Case Against Reality
Consciousness and the Quantum: Putting it all Back Together
How the Mind and Brain Influence One Another
The Diverse States of Subjective Consciousness
Consciousness as a Quantum Climax
TOEs, Space-time, Timelessness and Conscious Agency
Psychedelics and the Fermi Paradox
The Evolutionary Landscape of Symbiotic Existential Cosmology
Evolutionary Origins of Neuronal Excitability, Neurotransmitters, Brains and Conscious Experience
The Extended Evolutionary Synthesis, Deep and dreaming sleep
The Evolving Human Genotype: Developmental Evolution and Viral Symbiosis
The Evolving Human Phenotype: Sexual and Brain Evolution, the Heritage of Sexual Love and Patriarchal Dominion
Niche Construction, Habitat Destruction and the Anthropocene
Democratic Capitalism, Commerce and Company Law
Science, Religion and Gene-Culture Coevolution, The Spiritual Brain, Creationism
The Noosphere, Symbiosis and the Omega Point
Animism, Religion, Sacrament and Cosmology
Is Polyphasic Consciousness Necessary for Global Survival?
The Grim Ecological Reckoning of History
Anthropological Assumptions and Coexistential Realities
Shipibo: Split Creations and World Trees
Meso-American Animism and the Huichol
Pygmy Cultures and Animistic Forest Symbiosis
San Bushmen as Founding Animists
The Key to Our Future Buried in the Past
Entasis and Ecstasis: Complementarity between Shamanistic and Meditative Approaches to Illumination
Eastern Spiritual Cosmologies and Psychotropic Use
Psychedelic Agents in Indigenous American Cultures
Natty Dread and Planetary Redemption
The Women of Galilee and the Daughters of Jerusalem
Descent into Hades and Harrowing Hell
Balaam the Lame: Talmudic Entries
Soma and Sangre: No Redemption without Blood
The False Dawn of the Prophesied Kingdom
Transcending the Bacchae: Revelation and Cosmic Annihilation
Ecocrisis, Sexual Reunion and the Tree of Life
Biocrisis and the Patriarchal Imperative
The Origins and Redemption of Religion in the Weltanschauung
A Millennial World Vigil for the Tree of Life
Redemption of Soma and Sangre in the Sap and the Dew
Maria Sabina’s Holy Table and Gordon Wasson’s Pentecost
Santo Daime and the Union Vegetale
The Society of Friends and Non-sacramental Mystical Experience
The Vision Quest to Discover Symbiotic Existential Cosmology
Scepticism, Belief and Consciousness
Psychedelics – The Edge of Chaos Climax of Consciousness
Discovering Cosmological Symbiosis
Evolution of Symbiotic Existential Cosmology
Communique on Preserving the Diversity of Life on Earth for our Survival as a Species
Affirmations: How to Reflower the Diversity of Life for our own Survival
Epilogue
Symbiotic Existential Cosmology is Pandora's Pithos Reopened and Shekhinah's Sparks Returning
The Weltanschauung of Immortality
Paradoxical Asymmetric Complementarity, The Natural Face of Samadhi vs Male Spiritual Purity, Clarifying Cosmic Karma
Empiricism, the Scientific Method, Spirituality and the Subjective Pursuit of Knowledge
Appendix Primal Foundations of Subjectivity, Varieties of Panpsychic Philosophy
8 Fractal Biocosmology, Cosmological Panpsychism and Symbiotic Cosmology
Fractal Biocosmology
Fractal biocosmology (King 2020a) is an indisputable empirical feature of the universe, with only one partially unresolved link: the biogenesis pathway from organic molecules found in galactic gas and dust clouds to the first evidence for life on Earth, in rock formations some 3.6 billion years ago, shortly after the oceans formed. Recent research has, however, filled in many of the gaps in this account, so there is a high degree of confidence that this stage is also cosmological in nature.
Fig 51 : Fractal biocosmology synopsis (See text below for discussion).
The physical universe and its laws hinge on investigations at two extremes, unified field theories of the fundamental forces at the quantum level and the evolution of the universe as a whole at the cosmological scale. At the quantum level, matter ends up being composed of multi-layered quantum structures, in which the strongest forces form interactive bonds first, and the rest follow in sequence of relative strength. The most complex of these quantum structures are atoms and molecules on the planetary surface, where all the forces come into structural interaction.
On the cosmic scale, following primal symmetry breaking, the universe forms a fractal structure of clusters of galaxies shaped by dark matter gravitational mass, called the cosmic web, illustrated above for the local Laniakea supercluster. Galaxies form, containing billions of stars, generally with massive black holes at their cores, as illustrated by Centaurus A above. Supernovae and colliding neutron stars end up generating the 100 or so atomic nuclei of the chemical elements, seeding later, smaller long-lived stars with the mineral elements. Within these galaxies are nebulae, consisting of gas clouds in which star and planetary formation are taking place, such as the Orion Nebula above. These clouds also contain molecular precursors of life, from HCN and HCHO above to amino acids, seeding the biogenesis of evolutionary life. Radio-telescope data as early as 1974 (Buhl) demonstrated clouds of multiple-bonded HC≡N and H2C=O spanning the region in the Orion nebula where several new stars are forming, fig 51. More recently, surveys from Herschel have produced high-resolution maps of the distributions of HCN and ionised H2CO and NH3 in both the Perseus and Serpens molecular clouds (Storm et al. 2014). These molecules form a core of primitive prebiotic syntheses in the laboratory, because multiple –C≡C–, –C≡N and >C=O bonds are among the strongest covalent couplings in the universe, but are unstable to their higher-energy π orbitals opening to make heterocyclic molecules, resulting in key prebiotic polymerisations. The nucleotide base adenine, for example, is (HCN)5 and is produced this way on industrial scales.
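The claim that adenine is (HCN)5 can be checked by simple stoichiometry: adenine's molecular formula is C5H5N5, exactly five HCN units. A minimal sketch (Python is used purely for illustration; the atomic masses are standard values):

```python
# Check that adenine (C5H5N5) is stoichiometrically five HCN units,
# consistent with its prebiotic synthesis by HCN pentamerisation.
MASS = {"H": 1.008, "C": 12.011, "N": 14.007}  # standard atomic masses, g/mol

def molar_mass(formula: dict) -> float:
    """Sum atomic masses for a formula given as {element: count}."""
    return sum(MASS[el] * n for el, n in formula.items())

hcn = molar_mass({"H": 1, "C": 1, "N": 1})      # ≈ 27.03 g/mol
adenine = molar_mass({"C": 5, "H": 5, "N": 5})  # ≈ 135.13 g/mol

assert abs(adenine - 5 * hcn) < 1e-9  # (HCN)5 exactly, atom for atom
print(round(hcn, 2), round(adenine, 2))
```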
Fig 52: Polymeric pathways from HCN lead to both amino acids and nucleotide bases such as adenine (top left).
The four core forces of nature – the three of the standard model plus conventional gravity – are symmetry-broken. The colour force couples in triplets, while the weak-electromagnetic force couples in pairs of opposite charge. Their charge interactions display inverse quadratic non-linearities that give atomic and molecular matter characteristics similar to Julia and Mandelbrot set fractals in terms of their quantum dynamics. Gravity and electromagnetism both display inverse quadratic non-linearities, leading to chaotic dynamic regimes both on solar system scales and in molecular interaction. Biological tissue is the ultimate expression of this quantum fractality in the universe, as shown in fig 51. The abiogenesis of replicative life is a fractal molecular phase transition from a more diverse, chaotic, far-from-equilibrium milieu to the more ordered system of biomolecules in living systems, such as nucleic acids and polypeptides, which have evolved through subsequent functional refinement. This is a classic example of a transition toward the edge of chaos in living evolution from a more chaotic regime.
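The comparison with Julia and Mandelbrot set fractals refers to iteration of a quadratic map. As an illustrative sketch only (not a model of molecular matter), the escape-time calculation below shows how the simple non-linear iteration z → z² + c separates bounded (ordered) from escaping behaviour, with points near the fractal boundary lingering longest; the specific test points are arbitrary:

```python
def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z^2 + c from z = 0; return the step at which |z|
    exceeds 2 (guaranteed escape), or max_iter if z stays bounded."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

assert escape_time(0 + 0j) == 100   # interior of the Mandelbrot set: bounded
assert escape_time(1 + 0j) < 100    # exterior: escapes in a few steps
# points near the fractal boundary take many iterations before escaping
assert escape_time(-0.75 + 0.1j) > escape_time(1 + 0j)
```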
As shown in fig 51, the colour force gluons bind quarks, in triplets of three colours and two base weak-force flavours, forming protons and neutrons. These in turn become bound together by the strong nuclear force, a secondary effect of the colour force analogous to the van der Waals force in chemistry. Electromagnetism and the weak force have broken symmetry with one another. The weak-force equivalents of the electromagnetic photon, the W± and Z particles, inherit a large mass from coupling to the Higgs particle, discovered at the LHC. This means the weak force is very short range and operates primarily in the nucleus, exchanging the identities of neutrons and protons to minimise the energy in atomic nuclei and mediate the electromagnetic repulsion between positively charged protons. This in turn generates the diverse table of the atomic nuclei, the chemical elements and interactive molecular structures. A planetary surface, with a free-energy source of incident solar radiation, held together by gravitation, sets the context for the negentropic quantum structural explosion we call replicative life.
Life is present, so far as we know, only on planetary surfaces at much lower energies than the nuclear reactions of stars, where there is an incoming source of free energy in solar radiation, complemented by chemical gradients in the chaotic planetary environment, so it would appear that life can have little consequential effect on the evolution and fate of the universe as a whole. Traditional cosmology thus treats life as a phenomenon superfluous to the universe at large, because the energies and forces not just of the big bang, or of giant black holes at galactic centres, but even of a small star like the Sun, are on a scale that would fry life to a crisp. However, this picture, based on relative energy strengths, fails to appreciate the full scope of the interactive process set off by the cosmological symmetry-breaking of the forces of nature.
The chemical elements form a complex sequence of quantum structures, with orbital electrons captured by the positive charge of the nucleus, having non-linear energetics driven by the inverse quadratic electromagnetic force. The table of the elements is periodic in terms of the chain of s, p, d and f orbitals of successively higher angular momentum and energy, according to linear Schrödinger equations and their ensuing σ and π molecular orbitals through electron sharing, but it is non-linear in terms of charge effects between negatively charged electrons and the positively charged nucleus. This means that the periodic table is not just periodic but a non-linear spiral of properties in which, for example, O and S, N and P, and C and Si each have quite distinct properties, despite having the same outer orbital occupancy.
The bioelements thus form a san-graal, or more correctly sang réal, "royal blood" relationship with the chemical elements as a whole, as a core re-entrant interactive manifestation of cosmic symmetry-breaking, in which the strongest covalent first-row elements CNO, coupled with H, form the core. The circular table of the elements of life in fig 51 shows that these form a symmetry-broken quantum interference arrangement, centred on optimally covalent H-CNO as backbone-building elements, complemented by the ionic pairs K+/Na+, Ca++/Mg++ and Cl–, with second-row P and S adding additional pathways through polyphosphates and –S–S– bonds, then extended by transition elements such as Zn, Cu, Co, Fe and Mn through to Mo as electronic catalysts. This gives a central cosmological status to life as the final interactive product of cosmic symmetry-breaking of the colour, weak and electromagnetic forces in the standard model of physics.
The orbital electrons are able to enter into a graduated series of chemical bonding structures, from strong covalent and ionic bonds to weaker so-called hydrogen bonds and van der Waals forces. Due to the non-linear energetics of cooperative weak bonding, interactively in a negentropic environment, these become able to generate fractal quantum structures extending up to the macroscopic scale of organisms, as illustrated by serotonin, the protein EGF and the ribosome complex above – the factory that makes proteins instructed by DNA, composed of RNAs and a number of ancillary proteins. On a larger fractal scale again, these form sub-cellular organelles, such as the membrane shown above and the endoplasmic reticulum. We then reach the scale of the single cell, illustrated in fig 51 for a kidney cell of a green monkey. Finally, we reach tissues in multi-celled organisms, such as the olfactory bulb of the mouse above (Sakaguchi et al. 2018), then organs, illustrated above by the conscious human brain, the whole organism and the planetary biosphere.
Fig 53: (King 2020a) Components of the link from organics in the universe to the origin of life on Earth. Top left: Solar system planets and moons of Jupiter and Saturn show divergent molecular compositions and chaotic surface phenomena, forming an open set of primordial conditions. Left: Murchison carbonaceous chondrite (inset), major amino acids and sugar components, and the sheer diversity of organic products. Centre left: Lost City vents formed by a chemical garden reaction between basic olivine (lower) and acidic sea water with dissolved CO2. Resulting H2 and CO can drive the formation of organics including C1-4 hydrocarbons. These can be concentrated to biological levels (lower). Upper right: One-pot synthesis of pyrimidine nucleotides (green), compared with the unsuccessful conceptual synthesis from ribose and pyrimidine subunits (blue) (Powner et al. 2009). Lower right: One-pot synthesis of pyrimidine ribonucleotides and purine deoxyribonucleotides (Xu et al. 2020).
Along with 15 amino acids, all the nucleotide bases A, U, G and C have been detected in carbonaceous chondrites such as the Murchison and Tagish Lake meteorites (Glavin et al. 2012) – chondrites containing primitive material from the Solar System's origin, chemically altered by water during their time on asteroidal bodies before falling to Earth (Callahan et al. 2011, Oba et al. 2022). These also contain amphiphilic membrane-forming products. Alanine has been found to have a chiral excess of the L-enantiomer, and L-excesses were also found in isovaline, suggesting an extraterrestrial source for molecular asymmetry in the Solar System. Ribose has been found in carbonaceous chondrites; the ribose in the Murchison has 13C levels 43% higher than terrestrial, confirming an extraterrestrial origin (Furukawa et al. 2020). Measured purine and pyrimidine compounds, including guanine, cytosine, adenine, thymine and uracil, as well as others such as xanthine, are indigenous components of the Murchison meteorite, which also contains phyllosilicates and olivine. Silicon carbide crystals in it date from as far back as 7.5 billion years ago, nearly twice the age of the Sun and solar system (Heck et al. 2020). Comets have likewise been shown to contain primordial solar system organics, explaining how life appeared rapidly on the early Earth after a period of heavy cometary and meteorite bombardment. The amino acids found have also been synthesised in laboratory experiments by the action of electric discharge on a mixture of methane, nitrogen and water with traces of ammonia (Kvenvolden et al. 1972). A complex mixture of alkanes was isolated as well, similar to the Miller-Urey experiment.
A vast array of prebiotic molecules have been detected by mass spectrometry in the Murchison, of higher diversity than the present biosphere (Schmitt-Kopplin et al. 2009):
Here we demonstrate that a nontargeted ultra-high-resolution molecular analysis of the solvent-accessible organic fraction of Murchison extracted under mild conditions allows one to extend its indigenous chemical diversity to tens of thousands of different molecular compositions and likely millions of diverse structures. This molecular complexity, which provides hints on heteroatoms chronological assembly, suggests that the extraterrestrial chemodiversity is high compared to terrestrial relevant biological- and biogeochemical-driven chemical space.
This amounts to a clear manifestation of molecular pan-fecundity, a chaotic diversity of oligomeric fractal molecules with close to the ergodic maximum variety possible. These form an optimally fecund mix for subsequent dynamics on the planetary surface, balancing solar energy input against radiative degradation. This creates a far-from-equilibrium dynamic tending to minimum entropy production (Prigogine 1984, Klein & Meijer 1954), in which each type of molecule has a potentially catalytic influence through its orbital attractions, at the same time participating in creation-annihilation events when bonds are formed and broken. But this isn't any kind of genetic panspermia; it is pan-fecundity of chaotic molecular diversity, characteristic of the CNO-H + S component of the light elements on a rocky 'goldilocks' planet transitioning down to the temperature range of liquid H2O. The process approaches chaotic fecundity because this optimally covalent molecular sector of the table of the elements starts out with diversity and has no strong dynamic attractor to form predominant species. Given the founding diversity and the opposing influences of anabolism and catabolism, we end up with a quantum energy landscape with few strong attractors and ergodic diversity.
Biogenesis then becomes a transition from deep molecular chaos towards the edge of chaos, in which auto-catalytic populations emerge and, as we come closer to the edge, universal computation as in cellular automata ensues, eventually arriving at nucleotide replication and translation, leading to the reduced set of molecular varieties we find in metabolic biochemistry: RNA, DNA, proteins and lipids. This process is akin to an ergodic version of adiabatic quantum computation, and the autocatalytic features constitute a selective process affecting the future survival of the species involved, whose catalytic potential is its "genetic" signature, forming a primitive evolutionary process in the collective auto-catalytic system, as a prebiotic progenote.
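The cellular-automaton reference can be made concrete with Rule 110, an elementary one-dimensional automaton proven capable of universal computation at the edge of chaos. A minimal sketch of its update rule (the row width and seed are arbitrary illustrations):

```python
# Rule 110: binary 01101110 maps each 3-cell neighbourhood to the next
# state; despite its simplicity this rule is proven computation-universal.
RULE110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(row):
    """One synchronous update with periodic boundary conditions."""
    n = len(row)
    return [RULE110[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
            for i in range(n)]

row = [0] * 16
row[8] = 1              # a single live cell
for _ in range(5):
    row = step(row)
print(row)              # the pattern grows leftward from the seed
```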
Carl Woese (1998) coined this term for the kind of pseudo-organism that hypothetically existed prior to LUCA, the universal common ancestor of all life. He described a process of genetic annealing, similar to neural net annealing on a potential energy landscape, at the point where RNA replication and protein translation had both become established:
First consider the analogy: a physical annealing system starts at a high enough temperature that structures cannot form and then proceeds to slowly cool. In this quasi-stable condition, various combinations of the system's elements form, dissociate, and reform in new ways, with only the most stable and structured of these combinations initially persisting, i.e., 'crystallizing.' As the temperature continues to drop, less stable structures begin to form, to crystallize, and many of the preexisting ones add new features, becoming more elaborate. In the evolutionary counterpart of physical annealing, the elements of the system are primitive cells, mobile genetic elements, and so on, and physical temperature becomes 'evolutionary temperature,' the evolutionary tempo. The evolutionary analog of 'crystallization' is emergence of new structures, new cellular subsystems that are refractory to major evolutionary change.
These considerations apply equally to the prebiotic transition before genetic replication and translation, when the catalysts were even weaker than early genetic systems, which could translate only small proteins inaccurately and replicate somewhat erratically at low fidelity. The entire autocatalytic population is therefore required to maintain survival and evolution.
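Woese's physical analogy corresponds to the simulated annealing algorithm: at high "temperature" almost any variation is accepted, and as the temperature falls only stable, energy-lowering structures "crystallise". A toy sketch on a hypothetical one-dimensional energy landscape (the energy function, schedule and parameters are illustrative assumptions, not from Woese):

```python
import math
import random

def anneal(energy, x0, steps=2000, t0=5.0, cooling=0.995, seed=1):
    """Simulated annealing: accept worse states with probability
    exp(-dE/T); as T cools, only stable low-energy states persist."""
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        cand = x + rng.uniform(-1, 1)            # random perturbation
        d_e = energy(cand) - energy(x)
        if d_e < 0 or rng.random() < math.exp(-d_e / t):
            x = cand                             # accept the move
        if energy(x) < energy(best):
            best = x                             # track the 'crystallised' optimum
        t *= cooling                             # slow cooling schedule
    return best

# Toy landscape with its minimum at x = 2 (purely illustrative)
best = anneal(lambda x: (x - 2.0) ** 2, x0=10.0)
print(best)
```

The evolutionary counterpart replaces physical temperature with evolutionary tempo: early on, variants form and dissolve freely; as the tempo drops, subsystems become refractory to further change.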
Lost City vents are formed by a chemical garden reaction between basic olivine and acidic sea water with dissolved CO2. Olivine is cosmologically abundant on asteroids, Earth and the Moon, and was far more abundant on the early Earth. The resulting H2 and CO can drive the formation of organics including C1-4 hydrocarbons. Lost City vents have been found to form carbonate columns with pores which have been demonstrated to concentrate organics, and in particular nucleotide molecules, by a factor of over 1000 (Baaske et al. 2007), bringing them up to molar concentrations where a reactive metabolism and informational replication could be sustained. This provides a prospective nursery environment for life to emerge as a far-from-equilibrium complex dissipative system becoming a cooperative progenote of replicating nucleotide and polypeptide molecules, with cell membranes and the genetic code arising later as an evolutionary product (fig 59).
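The >1000-fold figure is plausible because thermophoretic accumulation in a pore compounds roughly exponentially with pore length. As a toy arithmetic illustration only (the twofold per-section factor and section count are assumed values, not from Baaske et al.), ten stacked sections each enriching twofold already exceed a thousandfold:

```python
# Toy compounding model of pore accumulation: if each section of a pore
# enriches the solute by a modest factor (assumed 2x, purely illustrative),
# total enrichment grows exponentially with the number of sections.
per_section = 2.0
sections = 10
total = per_section ** sections
print(total)  # 1024.0, already beyond the 1000-fold figure
```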
The “hard problem” of the critical step to replication has all but been solved in laboratory one-pot reactions (fig 53), capable both of generating nucleotides from primitive precursors (Powner et al. 2009, Patel et al. 2015, Stairs et al. 2017) and of producing a pyrimidine ribonucleotide and purine deoxyribonucleotide alphabet in the same pot (Xu et al. 2020). The key to understanding this is that all living systems are thermodynamically unstable dissipative systems depending on external sources of energy to maintain their polymeric stability; otherwise the processes would immediately run to equilibrium in a terminal polymer. This is exemplified by the ATP-Mg complex in fig 54: a single adenine nucleotide monomer with a triphosphate tail, forming the principal energy currency of all metabolism and demonstrating the exothermic nature of the polyphosphate bond that is also the linking bond in oligonucleotides.
This means that there is no stable polymerisation route to the first oligonucleotides, so laboratory simulations, for example of spontaneous RNA polymerisation, cannot be performed without the complex substrate of prebiotic molecules that enabled this process. Such processes necessarily take long lifetimes to explore the “topologically open” set of initial conditions that make this possible, although on evolutionary time scales they were short, as there is now evidence for life 3.45 billion years ago, only around 500 million years after the oceans condensed (Schopf et al. 2002, 2017, Dodd et al. 2017). In every case where the route has proved difficult, from synthesising polynucleotides (fig 53) to RNA-driven protein synthesis (fig 54), alternative pathways have subsequently been discovered, demonstrating that the preconceived difficulties arose from failing to think outside the box, for example by assuming the sugars and bases have to be formed and then joined as whole units, which fig 53 shows is fallacious.
Fig 54: The ribosome (lower left), involving two large rRNAs (pink) with chaperone proteins (purple), the left one holding the mRNA and the right one building the nascent protein, along with small tRNAs (upper left), specific for each amino acid, which have to be coupled to their amino acids by protein synthetases (Carter & Wills 2018). (Centre) A primitive precursor (Müller et al. 2022) to the dependence of RNA-based protein synthesis on the ribosome. Modified bases, still found on transfer RNAs (top centre), have been found to be able to induce formation of oligomeric chains of peptides (lower centre) by a cyclic process involving base-pair binding between small RNAs carrying these modified bases (top right), confirmed by the chromatography results (lower centre). (Right) Proto-ribosome, consisting of RNA molecules comprising only some 120 nucleotides, about 60 for each of its two semi-symmetrical components, which accounts for less than 5 percent of the modern ribosome's dimensions: some 4,500 nucleotides in bacteria and nearly 6,000 in humans. This forms a pocket able to join amino acids together. "Peptide bond formation is the most vital activity in any cell, and we've shown that it can take place within a protoribosome" (Bose et al. 2022).
Müller et al. (2022), fig 54, provide a third counter-intuitive, ground-breaking discovery, showing that the modified RNA bases still present in transfer RNAs can, when two RNAs hydrogen-bond together, promote the polymerisation of amino acids, providing a precursor route to the translation apparatus that later emerged in the ribosome. These modified bases, attached to short RNAs like tRNAs, can co-synthesise peptide chains, forming diverse hybrid RNA-peptides with all kinds of catalytic functions no one had thought of. Here we have a demonstration of a completely new insight, experimentally confirmed in the liquid chromatography assays in the figure, and thus feasible with the modified bases still existing in tRNAs, which is surprising in itself; originally there would have been more. Confining RNAs to be pristine and proteins to be pristine is a misconception that has arisen because genetically coded translation later refined the process to be like that. The underlying process is highly fecund, as we should have expected, because amino acids are more cosmologically abundant than nucleotide bases and have much greater catalytic diversity, so hybrid RNA-polypeptide molecules have more catalytic and complexity capacity to bootstrap abiogenesis. Bose et al. (2022), fig 54, have taken this a step further, discovering a semi-symmetrical proto-ribosome capable of forming peptides with only 5% of the ribosomal rRNA. These discoveries in one fell swoop demolish any irreducible-complexity argument for the ribosome, transforming the paradigm of abiogenesis.
The climax of the cosmological interactive sequence emerging from the symmetry-breaking of the fundamental forces is conscious life. The brains of higher mammals and birds are the most complex coherent quantum structures we know of in the universe and thus, in terms of the cosmological interaction sequence, form their ultimate consummation. This structurally inverts the Copernican principle that humanity does not have a privileged view of the universe. Not only does it have a privileged view because we are conscious, both of ourselves and of the universe as a whole, but because we are its ultimate structural and dynamic expression, arising from the cosmic origin.
A major concern is that of the Fermi paradox – the lack of astronomical evidence for extraterrestrial life. One critical hypothesis is not that intelligent life is unlikely, but that its probability for self-destruction destabilises the evolutionary paradigm through cultural misadventure, just as we are seeing with the human-induced climate and biodiversity crisis.
Darwinian Cosmological Panpsychism
Darwinian panpsychic cosmology provides a description in which the subjective aspect is an integral complement to the objective physical universe, encapsulated in a series of evolutionary forms: (a) individual quanta, (b) critically unstable multi-quantum dynamical systems, including (c) living cells, (d) in sentient form in eucaryotes, (e) in organismic form in multi-celled organisms, (f) in the evolving biosphere and (g) collectively in the universe. This is basically an evolutionary classification, with edge-of-chaos phenomena and quanta linked by the butterfly effect. It replicates the results of standard quantum mechanics and of molecular biology, except in so far as idiosyncratic outcomes of quantum uncertainty associated with collapse of the wave function are concerned, where the subjective aspect has functionality without disrupting physical causality.
it’s perfectly reasonable to say that “the weather has a mind of its own”; it just happens to be a mind whose details and “purposes” aren’t aligned with our existing human experience (Stephen Wolfram 2021).
Fig 55: Darwinian panpsychic cosmology synopsis
The cosmology thus replaces irreducible randomness of the Copenhagen interpretation of quantum mechanics with pan-psychic collapse generated through space-time quantum entanglement as input expressed in individual quantum events as output. Because irreducible randomness leaves the individual quantum free to manifest at any location in its equi-probable space normalised by the wave function, as consistent with the pilot wave interpretation’s replication of standard quantum mechanics, no other inconsistencies arise.
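"Replicating standard quantum mechanics" here means that however an individual outcome is selected, the long-run outcome frequencies must still follow the Born rule p = |ψ|². A sketch checking this observational constraint by sampling (the amplitudes are arbitrary illustrative values):

```python
import random

# Arbitrary normalised amplitudes for a 3-outcome measurement
amps = [0.5 ** 0.5, 0.3 ** 0.5, 0.2 ** 0.5]
probs = [a * a for a in amps]          # Born rule: p_i = |psi_i|^2
assert abs(sum(probs) - 1.0) < 1e-12

rng = random.Random(42)
N = 100_000
counts = [0, 0, 0]
for _ in range(N):
    # However an individual outcome is 'chosen', observational
    # equivalence constrains only these long-run frequencies.
    r = rng.random()
    k = 0 if r < probs[0] else (1 if r < probs[0] + probs[1] else 2)
    counts[k] += 1

freqs = [c / N for c in counts]
print(freqs)   # ≈ [0.5, 0.3, 0.2]
```

Any collapse account whose statistics match |ψ|² in this sense, whether irreducibly random or panpsychically selected, is observationally equivalent at the level of frequencies.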
Because quanta may also be able to act under certain circumstances as interactive panpsychic “observers”, the universe is able to collapse its own ramifying wave functions, with human observer collapse just being a special case acting on unstable brain states, and the multiverse becomes a real universe with an ongoing history as we perceive it. This picture is one in which new probability branches are being created in the wave function by superposition, in a similar manner to fractal cosmic inflation (Linde 1986, Hawking & Hertog 2018), while others are being collapsed by conscious measurement, resulting in dynamic evolution of the cosmic wave function. Special relativity, the most classical part of quantum reality, is implicitly retrocausal as well as causal, as in Feynman diagrams, so quantum reality is implicitly anticipatory, involving transactional collapse across relativistic space-time in which a network of potential transactions becomes one or a set of real emitter-absorber interactions.
Fig 56: Fractal Cosmic Inflation leaves behind symmetry-broken universes.
These systems all inherit the capacity to avoid the approach to the classical macroscopic limit, as they are processes that are not IID systems generated by independent and identically distributed measurements (Gallego & Dakić 2021). Similarly, in the approach of stochastic electrodynamics (SED) (de la Peña et al. 2020), in which the stochastic aspect corresponds to the effects of the collapse process towards the classical limit, consciousness has been proposed to be represented by the zero point field (ZPF) (Keppler 2018).
Fig 57: (1) The quantum stadium illustrates suppression of chaos in closed quantum systems. Top: Experimental realisation of scarring of the wave function around wave eigenfunctions, biasing the probabilities around unstable classical repelling orbits. Mid: Cellular automata simulation (King 2013). Bottom: Classical chaotic ergodic trajectory. (2) While the classical kicked top (above) shows similar regions of chaos to (1) in the Poincaré map sections (top), the quantum kicked top (middle and bottom) shows chaos inducing entanglement with nuclear spin (Chaudhury et al. 2012). Entanglement between the electron and nuclear spins is quantified by the linear entropy of the electron reduced density operator. (3) Weak quantum measurement demonstrating Bohmian trajectories (Kocsis et al. 2011). (4) Experiment confirming the existence of surreal Bohmian trajectories (Mahler et al. 2016). Conceptual diagram of the result of reading out the which-way measurement (WWM) in a double-slit apparatus in the near field (A) and in the midfield (B). Colour indicates the slit of origin of a Bohmian trajectory, and vertical position indicates the result of the WWM. This surreal behaviour is the flip side of the demonstrated nonlocality. This nonlocality is due to the entanglement of the two photons, which, in Bohmian mechanics, makes their evolution inseparable even when the photons themselves are separated. Because entanglement is necessary for the delayed measurement scenario, this nonlocal behaviour is to be expected and is the reason for the surreal behaviour.
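The linear entropy used in fig 57(2) to quantify entanglement is simple to compute: for a pure bipartite state, S_L = 1 - Tr(ρ_A²), where ρ_A is the reduced density operator of one subsystem. A minimal sketch, assuming a two-qubit toy model rather than the spin system of the actual experiment:

```python
import numpy as np

def linear_entropy(psi, dim_a, dim_b):
    """Linear entropy S_L = 1 - Tr(rho_A^2) of subsystem A,
    for a pure state psi of a (dim_a x dim_b) bipartite system."""
    psi = np.asarray(psi, dtype=complex).reshape(dim_a, dim_b)
    rho_a = psi @ psi.conj().T          # partial trace over subsystem B
    return 1.0 - np.real(np.trace(rho_a @ rho_a))

# Product state |00>: no entanglement, so S_L = 0
product = np.array([1, 0, 0, 0])
# Bell state (|00> + |11>)/sqrt(2): maximal two-qubit entanglement, S_L = 1/2
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

print(linear_entropy(product, 2, 2))
print(linear_entropy(bell, 2, 2))
```

S_L ranges from 0 for a separable pure state up to 1 − 1/d for a maximally entangled state of a d-level subsystem, which is why chaos-induced entanglement in the kicked top shows up as a rise in S_L.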
The Darwinian panpsychic description deals with seven broad evolutionary classes: (1) individual quanta, (2) edge-of-chaos physical phenomena, (3) excitable cells, (4) eucaryote cells with informationally sentient membranes, (5) organisms, (6) evolving biospheres and (7) the universe as a whole. As noted, panpsychic quanta possess both consciousness, through the space-time interactivity of the wave function, and free will, as a result of individual uncertainty, which Conway & Kochen (2009) show implies conscious free will.
In this description, processes (1)–(3) consist of primitive subjectivity, while there is a discrete transition to sentient consciousness in (4), where the sequestering of respiration in the mitochondria at the eucaryote endosymbiosis freed the cell membrane for excitable sensitivity and social informational signalling via molecules such as serotonin, which evolved to become the principal neurotransmitters in the brain. In philosophical terms, primitive subjectivity (what philosophers call phenomenal consciousness) is universally panpsychic, but the transitive structural details of subjective consciousness are emergent in a discrete transition with the eucaryotes, culminating in organismic consciousness through the constraints of neurodynamics.
The eucaryote endo-symbiosis – a discrete topological transition of re-entry of the two founding life forms, archaea and bacteria – is the most outstanding evolutionary transition since the origin of life. It resulted in edge-of-chaos excitability in the cell membrane, which becomes an arbitrarily sensitive sense organ due to the butterfly effect and also reaches down to the quantum level, invoking quantum uncertainty at unstable tipping points, thus freeing the dynamic to be causally open in a way which allows subjective conscious volition to intervene in the behavioural outcome. This is where attentive consciousness as we know it began, and its principles have continued to be used by multi-celled organisms ever since, in the diversity of conscious brains spanning the animal tree of life.
For this to have happened, not only the excitable brain but also its subjective conscious volition has to have conferred an evolutionary advantage on the organisms possessing it, in terms of anticipating immediate or intermediate threats to survival and opportunities to flourish. This suggests a form of anticipation which combines previous experience encoded by the brain with a form of conscious anticipation, which may utilise retrodictive aspects of quantum uncertainty.
Let’s look at this evolutionary question more closely. The fact that the founding eucaryote membrane after endo-symbiosis became a chaotically sensitive complex dynamic doesn’t mean it is just a physical dynamical system; if it were, conscious volition would never have emerged and been elaborated in multicellular brains leading to our own conscious experience and volition. This process was a discrete topological transition freeing up the cell membrane for arbitrarily sensitive sentience and social communication via the founding neurotransmitter molecules.
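The "butterfly effect" invoked throughout this argument, exponential divergence of nearby trajectories in a chaotic dynamic, can be illustrated with the logistic map in its chaotic regime. This is a toy illustration of sensitive dependence, not a model of membrane dynamics:

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), at r = 4 (fully chaotic regime).
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.300000)
b = logistic_orbit(0.300001)   # same orbit, perturbed by one part in a million

# The two trajectories track each other at first, then diverge to
# macroscopically different values as the perturbation is amplified.
print(abs(a[1] - b[1]))    # still microscopic
print(abs(a[30] - b[30]))  # macroscopic separation
```

Because the amplification is exponential, a perturbation at any scale, in principle down to the quantum level, is eventually promoted to a macroscopic difference in outcome, which is the sense in which such a dynamic is described here as causally open.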
We are dealing with a choice between conscious attentive volition and automatic brain processing. These have to have physical differences in the organisms, so this is not the same as David Chalmers’s zombies, which are conceived to be physically identical but not conscious. It is not just a philosophical question, but an evolutionary one. If we define a “zombie” as an organism without conscious volition that is otherwise closely similar to a conscious organism, there are two evolutionary arguments.
(1) We know the eucaryote endo-symbiosis was a major evolutionary transition, almost as significant as the origin of life itself, and that all the ancestral archaea that gave rise to the eucaryotes have been swept off the face of the Earth by the very success of the founding eucaryotes. In this transition, procaryote “zombie” organisms possessing primitive subjectivity, but not sentient consciousness, evolved into and were replaced by conscious eucaryotes. This is, however, a phase transition in which both conscious sentience and a hugely improved energy and informational metabolism resulted, so it is hard to attribute a definable portion of the evolutionary advantage to conscious volition.
(2) We can see, in the evolution from single-celled eucaryotes to complex animal brains, that these same intelligent cells have used the same dynamical basis to generate diverse brains, providing a highly adapted environmental perceptual and behavioural dynamic through these same principles in their neurons. These first emerged in the loosely networked nervous systems of cnidaria such as hydra, and then in mollusc, arthropod and vertebrate nervous systems, all using the same founding edge-of-chaos neuronal dynamics and all of which appear to be conscious, as their tendencies to REM-like dreaming sleep attest.
This brings us to the second type of evolutionary advantage displayed by the universality of consciousness in animals: the ability of conscious organisms to out-survive mutational “zombies” in which the role of conscious volition is reduced or eliminated. The way consciousness in amoebo-flagellates appears to work is by the membrane having edge-of-chaos excitability whose butterfly effect makes it causally open, because its sensitivity reaches down to the level of quantum uncertainty. Now there is nothing stopping mutations in the organism tweaking this dynamical system to reduce or eliminate the source of this causal openness, which after all has a cost in terms of uncertainty of outcomes and a cost in operating dynamically in this way. Such zombies could arise by a variety of mutational avenues, but there is no evidence for them. Giardia lamblia and related organisms have effectively lost their mitochondria, although relics remain, and so present an example of retrogressive evolution, although this may not directly alter cell membrane dynamics.
This argument applies to all of evolution since, so it can apply to an organism like a fruit fly, which could alter its developmental process so that its neurons retreated from the edge of chaos and differentially evolved to have entirely computational brains. Still, such organisms don’t exist so far as we can experimentally determine, except perhaps for the roundworm Caenorhabditis elegans, which comes close to computational organisation developmentally. Therefore the evolutionary argument holds.
Turning back to primitive subjectivity, many natural phenomena take the form of edge-of-chaos processes, such as wind, waterfalls, thunder and lightning storms, from turbulent mountain summits to the ocean, which from the point of view of symbiotic panpsychism are strong candidates for primitive coherent subjectivity, consistent with animistic views. However this is not a physicalist form of panpsychism, such as that advocated by Galen Strawson (2006). This form of panpsychism involves only root subjectivity, with brain dynamics as a boundary condition moulding how this is shaped into the subjective qualia we experience, so it does not require the detailed analysis of how qualia are composed subjectively that arises in pan-protopsychist theories. Primitive subjectivity is also consistent with the complementary subjective-objective picture of quantum reality advanced by Vimal (2008, 2009) and more elaborately by Boyer (2015).
This view of physical complexity differs from integrated information theory (IIT) (Tononi & Koch 2015) in that it stresses unstable but coherent subjectivity as an anticipatory property, rather than system complexity as in IIT's Markov complexity parameter, which is just an abstract mathematical formulation. Unlike Goff's notion of raw cosmic consciousness, possessing only elementary properties of agency and future awareness, the universe, like the biosphere, possesses forms of consciousness through the collective and individual subjective awareness of its participant biota.
While closed quantum systems such as nuclear energetics and the quantum stadium display suppression of chaos, by energy separation of the eigenfunctions and by scarring of the wave function, as shown in fig 57(1), chaotic systems where there is possible coupling to other interactions, such as the quantum kicked top, display additional quantum entanglement in the chaotic regime, exemplified by entanglement between electronic and nuclear spins in fig 57(2). This shows that quantum chaos induces deeper levels of entanglement.
In weak quantum measurement, fig 57(3), a photon released from a laser-excited quantum dot can demonstrate Bohmian trajectories: a weak measurement slightly disturbs the wave function without inducing collapse by absorbing the particle, and over multiple trials, by detecting the particle’s final absorption at different time delays, individual trajectories can be mapped out. A significant aspect of this approach is that it involves a form of retrodiction, or backward causality, as we have seen in the Wheeler delayed choice experiment, fig 74. Bohm’s pilot wave theory involves a particle with a definite position, shaped by a quantum potential derived from the wave function, which carries all the other features such as spin.
Englert et al. (1992) noted that, in certain circumstances, the pilot wave theory could cause overlap of these real trajectories in such a way that some trajectories in a two-slit interference experiment behave as if they had come through the opposite slit to the one the Bohm interpretation implied, invoking a conflict with standard quantum mechanics. This caused a debate in the physics community, with opponents decrying the pilot wave interpretation. They were opposed by Basil Hiley, an original co-researcher with David Bohm. Hiley et al. (2000) stated: “We also argue that contrary to their negative view, these trajectories can provide a deeper insight into quantum processes [43].”
“This suggests that it may still be possible to retain the notion of a ‘particle’ even in the quantum domain. … Alternatively we could give a more general meaning to these curves. For example, we could imagine a deeper, more complex process, which is not localised, but extends over a region of space where the wave function is non-zero. The curve could then be interpreted as the centre of this activity as this process evolves in space”. … “In spite of these limited successes, the nature of this deeper process is still very illusive and arises essentially from the non-commutative structure of the quantum algebra.”
Mahler et al. (2016) demonstrated the actual existence of such trajectories, which are equivalent to streamlines of the probability currents in standard QM (Tastevin & Laloë 2018), noting that their existence did not imply a conflict between the theories, but indicated instead a new level of quantum entanglement between the particles:
“This nonlocality is due to the entanglement of the two photons, which, in Bohmian mechanics, makes their evolution inseparable even when the photons themselves are separated. Because entanglement is necessary for the delayed measurement scenario of ESSW, this nonlocal behavior is to be expected and is the reason for the surreal behavior they identify. Indeed, our observation of the change in polarization of a free space photon, as a function of the time of measurement of a distant photon (along one reconstructed trajectory), is an exceptionally compelling visualization of the nonlocality inherent in any realistic interpretation of quantum mechanics”.
Taking Hiley et al. and Mahler et al. at face value, we again have a situation where Bohmian trajectories are found to be consistent with quantum mechanics, confirming their validity in this context, but the situation invoked by the experiment demonstrates deepening entanglement in these interacting systems.
The validity of the pilot wave theory in these situations, where no conflict arises with the Feynman path integral formulation, means it is also legitimate to propose that the deeper entanglement plays a role in determining the position of the particle in the wave, since any position in the probability space normalised by the probability amplitude is equally legitimate under the concept of irreducible randomness. Thus the panpsychic cosmology is effectively consistent with both the pilot wave and standard interpretations, but suggests trajectories are derived from deep entanglement.
When a theoretical explanation of any such quantum experiment is made, the quantum equations used to describe the outcome presume the only entanglements are the ones defined by the experimental apparatus, but the quanta in the universe at large all carry a host of subtle entanglements from their past and future interactions and those of the other quanta they have interacted with. The equations are thus a first-order ideal that masks the deep entanglement in both the experimental situation and the world at large.
Thus while panpsychic cosmology appears to introduce an ephemeral subjective complement to the universe, the effect is to give us back the real historical universe we experience, rather than the shadow multiverse of superimposed wave functions, because the subjective aspect of quanta participates in wave function collapse. The universe is thus not stranded from manifesting historically in the absence of conscious animate observers, yet these animate observers are nevertheless able, through their consciousness, to collapse wave functions in contexts like Schrödinger’s cat, for example those involved in brain function. Indeed this makes von Neumann’s comment, that collapse can happen anywhere up to the point of conscious observation, prescient. In fact the traditional cat paradox experiment might collapse at the cat, which is also conscious. But there are also diverse human strategic situations where tipping points occur and small acts of idiosyncrasy can have world-changing consequences.
Critically, it restores our subjective conscious ability to apply volitional will to affect the physical world around us. As noted in the introduction, all sane people have an implicit existential awareness that we make subjectively conscious intentional decisions and apply our volitional will to produce change in the physical world. We act and feel that we are intentional agents, subjectively applying our will in our decisions and actions. In this sense I am defining agency as the ability of subjective conscious experience to affect the physical world through the application of consciously experienced volition, resulting in physical effects in our behaviour and actions. Affirming the efficacy of conscious volitional will leads directly to panpsychism: some matter (brains) can manifest subjective consciousness affecting the physical universe, but because the brain is normal matter, obeying the four core quantum forces, even though these may involve exotic quanta such as quasi-particle excitations, subjectivity must be a property complementing the physics of the universe. In this sense agency is subjective conscious volition intentionally affecting physical reality, unlike the purely objective notions of Moreno & Mossio (2015), where weak and adaptive agency are just objective dynamical structures.
The brain, in which continuous wave excitations and complementary discrete phased pyramidal action potentials are forming, is a process at face value homologous to quantum observation of edge-of-chaos dynamics at unstable tipping points. The role of consciousness as a quantum observer of the brain’s own attention dynamics, noted in Graziano’s (2016) AST model, would enable a quasi-causal role for volitional will to avoid lethal misadventure, by filling in the uncertainty gaps in edge-of-chaos computation, and thus validate our veridical impression of possessing autonomous will as real, rather than the delusion materialists claim it to be. Human decision-making has an idiosyncratic nature similar to a single quantum event, which I shall call a quantum instance, just as evolution is a sequence of adventitious quantum transformations, every one of which is a single unrepeated quantum instance, none of which individually converge to the classical expectation of the probability amplitude. In this way there is a deep correspondence between human decision making, evolutionary mutation and the cosmological idiosyncrasy [44] of a single quantum instance, which is completely uncertain. Hence the free will of the quantum is its instance, and our volitional will is also an instance.
See: Primal Foundations of Subjectivity for a further explanation of this process.
We next explore how the fractal and panpsychic cosmological pictures fit into a deeper symbiotic picture, in which biological life, the universe and consciousness all enter into a symbiotic relationship extending the manifest biological symbiosis that is the basis of the endosymbiotic eucaryote cell, eucaryote sexuality, cell-virus symbiosis and the natural and sexual selection (Darwin 1859, 1889) that is universal in the biosphere.
Fig 58: Cosmological symbiosis: Full scale cosmology image
Cosmological Symbiosis
Cosmological Complementarity
1. The physical universe has a veridical [45] complement – cosmological consciousness, or the “mind at large”, the subjective manifestation of the cosmos. Individual human consciousness is an encapsulated instance of the whole.
2. The mind at large shapes physical history in the quantum multiverse, through volitional will collapsing the superimposed, quantum-entangled wave functions.
Biogenesis
3. Cosmological fecundity: The physical universe is the most complex quantum fractal conceivable in space-time, due to cosmic symmetry-breaking of the four quantum forces – gravity, the weak and colour forces and electromagnetism.
4. Consequently emergent molecular action is a complex fractal quantum process, culminating in the symmetry-broken interaction of the four quantum forces of nature.
5. At the same time, planetary conditions permeate the degrees of freedom for biogenesis, due to chaotic dynamics of gravitation and the other forces.
6. Consequently, due to ergodicity [46], replicative life will take root in an open subset of cosmological conditions.
Evolution
7. Computational catastrophe: With the advent of genetic evolution, molecular interaction becomes a complex massively-parallel quantum computer, accumulating adventitious information through mutation and natural selection.
8. Cellular excitability: Edge of chaos excitable cells gain a coherent encapsulated form of panpsychism, which is adaptive to survival and is thus selected for.
9. Eucaryote symbiosis between the two founding branches of life, archaea and bacteria, triggers a complexity catastrophe.
10. Cellular consciousness: Adaptation to environmental modes of quantum perturbation of cell excitability in eucaryotes results in cellular sentience. This is the critical transition to existential consciousness. The transition to brains is a secondary extension.
11. Signalling molecules, such as serotonin, evolve to mediate modes of social interaction conducive to survival of the collective organism in single celled eucaryotes, also affecting epigenetics and, by selection over the result, genetics.
Organism
12. As organisms evolve to become multi-celled, cellular consciousness becomes organismic consciousness via neuronal coupling.
13. Fractal culmination in the Biota: Conscious organisms become the consummating fractal interactive expression of cosmological symmetry-breaking, running from quarks through nuclei, atoms and molecules, to molecular complexes such as the membrane and ribosome, to cell organelles, cells, tissues, organs such as the brain, societies of organisms and the symbiotic biosphere.
14. The brain’s organismic consciousness becomes evolutionarily adapted to aid the survival of the organism and the family.
15. In mammals, this involves limbic emotions, invoking a dynamic network for survival that we consciously identify with the ego.
16. At the same time, the brain, as a closely-coupled society of neuronal cells, interacting via the same signalling molecule types, remains dependent on elementary amine-based neurotransmitters, to modulate key survival strategies, because these arose from modalities directly ensuring the survival of the collective organism in single-celled species.
17. Involution: Again at the same time, given the variety of niches on an Earth-like planet, several species are likely to evolve to synthesise modified amino acid derivatives (e.g. psilocin, DMT, mescalin), capable of altering the dynamics of consciousness in such a way as to bring individual consciousness back into relationship with the mind at large using the same receptor pathways.
Biospheric, Psychic and Cosmological Symbiosis
18. All living species, including humans, survive through evolutionary niches in effective biospheric symbiosis with the whole.
19. Because Homo sapiens, the currently dominant species on Earth, has evolved an ego-based form of individual consciousness, evidenced in our tribal emergence, our species is not adapted to, and thus lacks the intrinsic ability to, care for the planet mindfully enough to avoid exploiting it to the extent that it becomes critically compromised, threatening human survival.
20. Entheogenic species bearing psychedelic neurotransmitter analogues, by tweaking a central brain survival mode at the receptor level, can precipitate ego dissolution, leading to moksha – reunion with the “mind at large”, thus evoking a psychic symbiosis with humanity complementary to our inter-dependence with food, medicinal, and biosphere-supporting species.
21. This psychic symbiosis enables humanity to find its role as the guardians of the living planet and the flowering of conscious existence in evolutionary and cosmic time scales, rather than becoming its tragic “espèce fatale”, thus resolving the existential and planetary crises, fulfilling the spiritual, eschatological and scientific quest for the meaning and purpose of intelligent life.
22. Psychic symbiosis is potentially as significant as the eucaryote symbiosis, because the future survival of the planet’s entire living diversity is at stake and it is thereby manifesting cosmological symbiosis of the physical universe and mind at large, thus providing a means to avoid a mass extinction of biodiversity invoking the self-destruct scenario of the Fermi paradox.
Symbiosis and its Cosmological Significance
There are seven reasons why a symbiotic solution to the central enigma of cosmology and the hard problem of consciousness is critical and why the panpsychic postulate of the existential reality of the mind at large is valid.
According to Thomas Hertog (2023), in a posthumous follow-on from Stephen Hawking's A Brief History of Time (1988), the new perspective that he achieved with Hawking reverses the hierarchy between laws and reality in physics and is "profoundly Darwinian" in spirit. "It leads to a new philosophy of physics that rejects the idea that the universe is a machine governed by unconditional laws with a prior existence, and replaces it with a view of the universe as a kind of self-organising entity in which all sorts of emergent patterns appear, the most general of which we call the laws of physics."
Firstly, it is critically important in this discussion to understand how pivotal symbiosis is to the continuity of life in the universe. When we talk about survival of the fittest and the notion of the selfish gene seeking its own replication, both occur in the context of natural selection, which is selection by symbiosis with the biosphere as a whole. Apart from a few extremophile archaea whose niches are predominantly geospheric, all evolutionary niches are biospheric, defined relative to the living diversity of all other species, so natural selection is the key vector of biospheric symbiosis. Thus the capitalistic notion of survival of the fittest in the concrete jungle of human economic business-as-usual is a biospheric tragedy of the commons (Hardin 1968), resulting only in planetary crisis, as a misaligned manifestation of human tribal origins. Species we see as predatory or parasitic also have symbiotic roles in ensuring the survival of their hosts and prey. For example, carnivore predation prevents herbivore prey populations going into boom-and-bust extinction by eating out all their vegetative food supplies.
All the interesting things in the universe happen at the edge of chaos. Cosmological symmetry-breaking causes the structure of molecular matter to be potentially fractal, so with the bio-elements we get the fractal structure of tissues. The edge of chaos is an abstract principle we can find throughout nature and cosmology. It is manifest in complementary regimes of order and chaos in both continuous and discrete dynamics, and in discrete cellular automata, where the edge-of-chaos rule 110 is a universal computer. It is a fundamental concept in key transitions in brain states in the Freeman model of neuronal dynamics and is a critical aspect of evolution's rise to climax diversity.
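Rule 110, mentioned above, is an elementary cellular automaton: each binary cell's next state is looked up from its three-cell neighbourhood using the binary digits of the number 110, and despite this simplicity it was proved to be a universal computer by Matthew Cook. A minimal sketch of its evolution:

```python
# Elementary cellular automaton rule 110: the next state of each cell is
# read from the binary expansion of 110 = 0b01101110, indexed by the
# (left, self, right) neighbourhood treated as a 3-bit number.
RULE = 110

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single live cell grows a characteristic left-leaning pattern.
row = [0] * 12 + [1] + [0] * 3
for _ in range(6):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The same lookup-table scheme generates all 256 elementary rules; 110 is notable precisely because it sits in the edge-of-chaos class between ordered and fully chaotic behaviour.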
We have also noted that there is a beautiful Goldilocks scenario for the origin of life. Firstly the galactic nebulae are flush with the primordial molecules that readily polymerise to nucleic acid bases and amino acids. Secondly the acidic CO2 filled ocean reacts with crustal alkaline olivine to form chemical gardens just like the one's I made as a kid. They were rife on the early Earth and still occur today in the Lost City vents. Experiments have shown they can also concentrate biomolecules 1200 times up to biological concentrations and feed them on the chemical reactions produced.
Fig 59: Left: Emergence of archaea and bacteria as complementary cellular life forms with differing membrane structures from the common progenote (Lane & Martin 2003). Top right: The divergence happened before the evolution of DNA polymerases (Leipe et al. 1999). Lower right: The electro-chemiosmotic foundation of cellular life (Lane & Martin 2012).
The end result is the chemiosmotic origin of life where membrane electrochemistry set the whole process going. This gave rise to two fundamentally complementary life forms, archaea and bacteria as well as a lot of viruses emerging from a cooperative milieu called a progenote. The archaea are geological organisms in salt pans, hot smokers, methane swamps, and volcanic hot pools. They don't cause diseases. Bacteria are fast metabolic organisms that decompose, photosynthesise and respire and can be pathogenic.
Archaea have different kinds of cell walls from bacteria, so it is likely that each emerged from the progenote separately, becoming cellular as they escaped it and gained metabolic autonomy via membrane energetics. The DNA polymerases of archaea and bacteria are also different, so it also looks as if they evolved and diverged before DNA-based life had stabilised, at a point where DNA and RNA were flipping cyclically using the reverse transcriptase that is still present in retroviruses like HIV, in the endogenous retroviruses in our genomes, and in telomerase!
Life thus started out with two complementary life forms, but then, about 2 billion years ago, an edge-of-chaos event happened, when a very slowly growing archaean with long protruding filaments started to grow alongside alpha-proteobacteria like our gut bacterium Escherichia coli, using the relationship to live off their high-value respiratory electron transport chain energy while also giving them value in return by exchanging metabolites. You can see close relatives of these two species at top right in fig 60 below. There are still some archaea and bacteria that do this today, but basically what happened is that the symbiosis was so successful that it wiped all trace of itself off the face of the Earth in a rapid quantum leap of evolution to form the higher nucleated eucaryote cells that make up amoebo-flagellates and complex organisms. Without this symbiotic quantum leap complex life could not exist!
(a) Eucaryote endo-symbiosis In the 1960s Lynn Margulis (Sagan 1967, Margulis 1970, Mann 1991, Haskett 2014) first published the theory that both the mitochondria universal to eucaryotes and the chloroplast in plants were endosymbionts. She also suggested that the kinetochore essential for the eucaryote flagellum and ordered separation of the chromosomes was an endosymbiont. Genetic analysis has subsequently proved that all complex cells are symbiotic with their mitochondria, and plants are in a three-way symbiosis more anciently with mitochondria and more recently with chloroplasts.
About half our genes, the metabolic ones, were derived from the bacterium and are now in the nucleus while the informational processing genes came from the archaean. Now the mitochondria have only a skeleton set of key genes constituting the maternally inherited mitochondrial DNA that showed us that the African Eve was a San woman.
Fig 60: Symbiosis is ubiquitous and essential to human life. Top Left: Human fertilisation. Symmetry-broken sexuality is a form of intra-species symbiosis between the genetic sexes. Top right: The symbiosis between archaea and bacteria to form the eucaryotes, leading to all complex life (see major evolutionary transitions). Lower left: Transposable elements occupy nearly half the human genome, have co-evolved with humans since the formation of multi-celled organisms and have inherited key symbiotic roles. Lower Right: Homo sapiens can survive as a species only by symbiosis with the biosphere, within which our very existence inter-depends. Human religious and commercial dominion over nature runs completely counter to biospheric survival over cosmological time scales and is frank evolutionary suicide.
Elucidating how this happened genetically from previous organisms didn't happen until a few years ago. In 2019 Lokiarchaea, the first of the Asgard archaeans to be discovered, was finally successfully grown in culture. It had originally been identified as a unique archaeal organism from microbial mud dredged near Loki's Castle, a sea-floor hydrothermal vent field off the coast of Greenland. In a 2015 study in metagenomics, Ettema and his colleagues sequenced genetic fragments from the microbial portion of the sediment and assembled them into fuller genomes of individual species. One genome stood out. It was clearly a member of the archaea, but dotted throughout it were eukaryote-like genes, and it was named Lokiarchaea, after Loki, the trickster of Norse mythology (Lambert 2019). However, unbeknown to the metagenomics researchers, Hiroyuki Imachi and colleagues (Imachi et al. 2019) had been working since 2007 to cultivate microbes from deep-sea sediments. They built a bioreactor that mimicked the conditions of a deep-sea methane vent. Over 5 years, they waited for the slow-growing microbes in the reactor to multiply, then took samples and placed these, along with nutrients, in glass tubes, which sat for another year before showing any signs of life. Genetic analysis revealed a barely perceptible population of Lokiarchaea. The researchers patiently coaxed the Lokiarchaea, which took 2-3 weeks to undergo cell division, into higher abundance and purified the samples. Over 12 years, in a breakthrough work, the researchers produced a stable lab culture (Prometheoarchaeum syntrophicum) containing only this new Lokiarchaeon and a different methane-producing archaeon in a symbiotic relationship. The researchers sequenced all the microbe's DNA, confirming that it does contain some genes that look like those found in eukaryotes.
This has now enabled verification that the cultured genome contains the eucaryote-related genes from the metagenomics analyses, and permits a much more detailed investigation of this critical group of organisms.
Fig 61: Top: Ettema's team (Zaremba-Niedzwiedzka et al. 2017) have found a superphylum comprising archaea in diverse environments, including marine sediments, aquifers and hot springs, which have a phylogenetic relationship with eucaryotes and include genes for vesicle formation, membrane-trafficking components and cyto-skeletal functions, including ESCRT and TRAPP domains. Bottom: Candidatus Lokiarchaeum ossiferum (Rodrigues-Oliveira et al. 2022). Insets: Evolutionary trees of ubiquitin genes UEV, E2, Vps22 and Vps25 (Hatano et al. 2022).
By carefully decanting cell cultures, Rodrigues-Oliveira et al. (2022) have isolated a new Asgard, Candidatus Lokiarchaeum ossiferum, which has a significantly larger genome than the single previously cultivated Asgard strain (Imachi et al. 2019). Wu et al. (2022) isolated two other Asgard species from rock collected at a hydrothermal vent in the Gulf of California and sequenced their complete genomes. These harbored mobile pieces of DNA containing bacterial genes involved in metabolism, suggesting these elements played a critical role in transferring genes among life's major branches. Hatano et al. (2022) have discovered four of the ubiquitin-ESCRT gene complexes that eukaryotic cells use to bend, cut up and stitch together their membranes to link internal compartments. On the other hand, Knopp et al. (2021) calculated that Asgard archaea contributed as little as 0.3% of the protein families believed to exist in the common ancestor of the eukaryotes, suggesting that stresses on the host drove eucaryote complexification, such as the nucleus, the Golgi apparatus, and the evolution of sex (Raval, Garg & Gould 2022).
Fig 62: Mamavirus (mimiviridae) with double membrane and capsid, and a parasitic phage.
Viral eucaryogenesis: There is also circumstantial evidence, from key genes involved in both viral and eucaryote replication and transcription, that the cell nucleus, mitosis and sexual meiosis arose from a lysogenic DNA virus that can integrate quiescently with the host cell genome and possesses a double membrane envelope. Such a virus could have invaded the endo-symbiont, or its archaeal precursor, and, rather than just altering its transcription to the virus's advantage, gone further and captured the archaean genetic organisation, to the extent that replication of the viral nucleus became coupled to replicating the entire archaean and viral genome, with transcription sequestered outside the nuclear envelope. Telomerases appear to have viral origins (fig 63). Many classes of nucleocytoplasmic large DNA viruses (NCLDVs), such as mimiviruses, have the apparatus to produce m7G capped mRNA, as in eucaryotes, and contain homologues of the eukaryotic cap binding protein eIF4E (Bell 2001, 2006, 2009, 2019, 2020, Chaikeeratisak et al. 2017a, b, Claverie 2006, Trevors 2003, Takemura 2020, Villareal & Witzany 2009). Asgard viruses and TEs have recently been described, with genes in transition between archaeal and eucaryote forms (Rambo et al. 2022, Wu et al. 2022). Indeed, viral eucaryogenesis could have been the founding event, leading to endosymbiosis.
(b) Sexuality (King 2016) is universal in both procaryote archaea and bacteria through a symbiosis between cellular and viral genomes, where plasmids and viruses also serve to exchange genetic material between hosts. Concomitant with the establishment of the archaeal-bacterial symbiosis, eucaryotes established symmetry-broken sperm-ovum dyadic sex to avoid genetic warfare in the symbiotic mitochondria, resulting in the two genetic sexes in each species (more in fungi), each becoming genetically interdependent with the other for survival and hence symbiotic. A key role of eucaryote sexuality is to enable a red-queen race between parasites and hosts, where sexually inherited genomic differences act to prevent total extinction of a monoclonal parthenogenetic host species, so that with very few exceptions, obligatory, or at least cryptic (intermittent), sexuality is universal. But sperm-ovum sex is a prisoners' dilemma of highly asymmetric reproductive investments, leading to sexually antagonistic co-evolution, starkly displayed in humans in attempts by men to assert patriarchal dominion over female reproductive choice in an evolving climate of female gatherer reproductive cooperation.
(c) Cell-virus symbiosis is also rife in the human genome (King 1985, 1992, 2020c), where transposable elements (TEs) occupy 46% of the human genome, making the TE content of our genome one of the highest among mammals, second only to the opossum genome with a reported content of 52%. LINE-1 elements, which have co-evolved in the human germ line with a history running back to the eukaryote origin, number 100,000-950,000 partially defective copies, around 100 of which remain fully active in humans; together with their 300,000 dependent smaller fellow-traveller Alu SINEs, they comprise 33% of the human genome. Long terminal repeat (LTR) retro-transposons comprise a further 8% and DNA transposons 3%. Retroviruses related to HIV also exist in endogenous forms in the human germ line, comprising 5 to 8%. Giving an evolutionary comparison, Dictyostelium has both LTR- and retro-transposons occupying 10% of a gene-dense genome in which around 66% codes for proteins (King 1985, Malicki et al. 2017). The survival of such a high proportion of transposable elements in such a tightly packed genome is strong evidence for symbiosis. In terms of the selfish gene (Dawkins 1976), transposable elements notwithstanding, organism genomes are one huge genetic symbiosis, through organismic survival and selection.
Fig 63: Retrotransposons (Xiong & Eickbush 1990), which replicate by copy-and-paste transcription and reverse transcription of RNA to DNA and insertion into the genome, are present in all eukaryotic genomes, where they constitute the most abundant class of mobile DNA, in many cases over 50% of the nuclear DNA, a situation that may have arisen in just a few Myr. They play a central role in the structure, evolution and function of eukaryotic genomes. Retroviruses evolved during the early Palaeozoic Era, between 460 and 550 million years ago, providing the oldest inferred date estimate for any virus group (Hayward 2017). Although retroviruses are found only in vertebrates, the wider grouping C of the metaviridae, which have evolved from LTR retrotransposons, is ubiquitous across eucaryotes, populating diverse eukaryotic genomes such as those of oomycetes, slime molds, fungi, plants and animals (Gorinsek et al. 2004).
Having integrated with our germ line, such elements both result in transpositions, which can cause mutations and genetic disease, and have also co-evolved to perform essential symbiotic tasks. Many of the historical transpositions have caused adventitious mutations, giving the inserted elements key functions in coordinated gene expression. LINE-1 elements are key to forming the blastula, show key expression in neural progenitor cells, and are essential in collapsing one of the two X-chromosomes, which are poisonous to females except in their germ line. Endogenous retroviruses have provided membrane budding genes such as syncytin, which aid the formation of the syncytium, the super-cellular membrane that enables diffusion from the mother to the baby, and immunity evasion, which avoids rejection of the embryo (Mi et al. 2000). Endogenous retroviral genes, such as suppressyn in humans, have, as in other mammals, been found to confer resistance against exogenous retroviral infection (Frank 2022). The recombination activating gene protein RAG1/RAG2, essential for the mutational variability of the vertebrate immune system, appears to have evolved from an ancient DNA transposon common to the metazoa (Agrawal et al. 1998, Kapitonov & Jurka 2005).
Ivancevic et al. (2016) have traced the evolutionary tree of LINE-1 back to the founding eucaryotes, as L1 elements occur in both plants and animal phyla spanning vertebrates, arthropods and molluscs such as octopuses, where L1 transposition has specifically been associated with high intelligence: transcription and translation measured for one of these elements resulted in specific signals in neurons belonging to areas associated with behavioural plasticity (Petrosino et al. 2022). L1, along with DNA transposons and LTR retroelements, is ubiquitous across the arthropods (Petersen et al. 2019). The same holds for DNA transposons and LTR retrotransposons generally, so cell-virus symbiosis is foundational.
(d) Biospheric symbiosis: Organismic symbiosis is then realised in the biospheric symbiosis of each species within the biosphere as a whole, in which natural and sexual selection is a measure of the survival of the most successfully symbiotic species within the biosphere, whether parasites, prey, predators or hosts. Predators, for example, function to stabilise the biosphere against unstable fluctuation by taking out the herbivore stragglers, preventing the herbivores from eating out their food supplies and starving in a boom and bust.
Fig 64: Biospheric Symbiosis: Top: Potato spindle viroid exemplifies competitive fitness in a single RNA molecule. Two male kangaroos in sexual combat. Cheetahs stealing meat from a lion. Second row: The logistic equation for rabbits in a finite pasture reaches chaotic instability, leading to a boom and a species-extinction bust. Lotka–Volterra equations between lynx and rabbits show coupled stable oscillations over 10-year seasons. A locust swarm consumes dominant vegetation. Third row: Climax plant diversity in the rain forest and large mammal diversity on the savannah. Bottom: The corona virus pandemic was caused by human misadventure, a symbiotic bat virus becoming pathological when wildlife was forced together in unnatural conditions, leading to mutational species jumps.
There are outstanding examples of competitive survival of the fittest in evolution. Genetic systems as simple as transposable elements and simple molecular viroids, such as the potato spindle viroid above display competitive life cycles, giving rise to the notion of the “selfish gene” (Dawkins 1976). Competitive survival also resounds in male sexual combat, ensuring genetic fitness of the resulting offspring. Competition also occurs between species, as illustrated above right in cheetahs stealing from a lion.
However many of the most outstanding features of tooth and claw that give us the image of the brutality of nature actually abet biospheric symbiosis. Robert May (1976) used the logistic iteration to model a rabbit population eating the remaining grass, giving x_{n+1} = r x_n (1 - x_n), where x_n is the population as a fraction of the pasture's carrying capacity. Depending on the reproduction rate r we get equilibrium, period doubling, and chaos, which eventually disrupts in a high-chaos bust at r = 4, where the population hits unity, consuming all the grass, and then falls to zero in extinction.
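The three regimes May described can be reproduced numerically. The following minimal Python sketch (an illustration of the logistic iteration above; the particular r values are standard examples chosen here, not taken from the source) iterates the map and reports its long-term orbit:

```python
def logistic_orbit(r, x0=0.2, transient=1000, keep=8):
    """Iterate the logistic map x -> r*x*(1-x), discard a transient,
    and return the next `keep` values of the orbit."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

# r = 2.8: settles to the stable equilibrium x* = 1 - 1/r
# r = 3.2: period doubling -- the orbit alternates between two values
# r = 3.9: chaos -- aperiodic, sensitively dependent on initial conditions
for r in (2.8, 3.2, 3.9):
    print(r, [round(x, 4) for x in logistic_orbit(r)])
```

Raising r through 3.0 and onward reproduces the period-doubling cascade into chaos, and at r = 4 an orbit touching x = 1 maps to 0 on the next step: the boom that consumes the pasture, followed by extinction.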
This shows that two-component herbivore-plant ecosystems are intrinsically unstable to lethal oscillation of the herbivore. When we include herbivore predators, the Lotka–Volterra equations give us precipitous but sustainable oscillations, as growing carnivore populations assimilate the herbivores down to low levels, leading to seasonal rebounds in each. Other population dynamics can become chaotic: Rogers et al. (2021) show that chaos is common (>30%) in natural ecosystems.
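The stabilising effect of adding the predator can likewise be sketched numerically. This Python fragment (an illustrative sketch with arbitrarily chosen parameter values, not fitted to any real predator-prey data) integrates the classic Lotka–Volterra pair dx/dt = αx − βxy, dy/dt = δxy − γy with a Runge–Kutta step; the prey population now cycles sustainably instead of crashing to extinction:

```python
def lotka_volterra(x0, y0, alpha=1.0, beta=0.5, delta=0.2, gamma=0.6,
                   dt=0.001, steps=20000):
    """Integrate prey x (herbivores) and predators y (carnivores) with
    classical 4th-order Runge-Kutta; returns the two population series."""
    def deriv(x, y):
        return alpha * x - beta * x * y, delta * x * y - gamma * y

    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        k1x, k1y = deriv(x, y)
        k2x, k2y = deriv(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
        k3x, k3y = deriv(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
        k4x, k4y = deriv(x + dt * k3x, y + dt * k3y)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        y += dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        xs.append(x)
        ys.append(y)
    return xs, ys

# Starting both populations below the equilibrium (x* = gamma/delta = 3,
# y* = alpha/beta = 2), the pair traces a closed, sustained oscillation.
xs, ys = lotka_volterra(1.0, 1.0)
```

The system conserves the quantity V = δx − γ ln x + βy − α ln y along each orbit, which is why the coupled oscillation neither damps out nor explodes: the predator converts the herbivore's lethal boom-bust into a bounded cycle.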
The lesson from all this, in naively simplistic mathematical terms, is that predators appear to be destructive to the herbivores but actually ensure their long-term survival by avoiding genocidal famines. Out on the savannah these seasonal oscillations may be less predominant, although wet and dry seasons and the vagaries of the climate will also cause fluctuations. The carnivores tend opportunistically to take out stragglers, and some of the young and old, so we rise to a climax of interacting species.
The same picture occurs in plant evolution. The rise of plant climax diversity is symbiotically moderated by a wide spectrum of herbivorous parasites and predators. The reason weedy plants don't rule the Earth is that insects, animals and fungal, bacterial and viral diseases all selectively target massive distributions of genetically similar plant material. The grasslands are subject to plagues of locusts. Plant diseases likewise target predominant species. Thus plant diversity, although also aided by environmental variance (Fung et al. 2016), is critically driven by insect predation, which explains why rainforests reach climax diversity rather than dominance by weedy species. Although plants also engage in inter-individual and inter-species competition, Forrister et al. (2019) tested two mechanisms thought to underlie negative density dependence (NDD), plant competition for resources and attack by herbivores, and confirmed that it is the load of insect predation that results in the related-species separation of climax diversity. After a long undisturbed period, wilderness habitats thus reach climax biodiversity, because the overall interactive species-specific forms of natural and sexual selection promote maximal genetic symbiosis and optimal species diversity.
A founding reason why all complex life is sexual and why we thus have the dilemma of individual mortality is that the diversity of sexual individuals in a species protects the species in the peacock’s tail race against its sexual parasites.
We are just beginning to emerge from a corona virus pandemic that occurred because of human impact on animals harbouring corona viruses. Forcing wild animals into close contact with other species, in ways that would not occur in the wild, gave rise to a new disease when bat corona viruses, which are in a symbiotic relationship with bats, were knocked out of their symbiosis and became a world plague, affecting not only humans but massive numbers of mink in dense mink farms, and even rhinos. Bats roost in vast “urban” cave communities, where multiple pathological diseases, from rabies to corona viruses, occur endemically. Bats have figured out how to dampen the corona presence down to an asymptomatic level using interferons. With the rise of the Omicron variant, involving recombination with other corona virus elements, the pendulum is potentially swinging back to endemic symbiosis. Thus again the plagues and pestilences we abhor as the worst aspects of nature in the raw are actually vectors of symbiotic climax.
Lynn Margulis has been pivotal, both in establishing the correct view of eucaryote endosymbiosis, and in conveying the importance of symbiotic relationships in evolution generally, and finally in biospheric terms, in her cooperation with James Lovelock, in establishing the Gaia hypothesis.
Lynn Margulis opposed competition-oriented views of evolution, stressing the importance of symbiotic or cooperative relationships between species. She later formulated a theory that proposed symbiotic relationships between organisms of different phyla or kingdoms as the driving force of evolution, and explained genetic variation as occurring mainly through transfer of nuclear information between bacterial cells or viruses and eukaryotic cells. Her organelle genesis ideas are now widely accepted, but the proposal that symbiotic relationships explain most genetic variation is still something of a fringe idea (Mann 1992, Wikipedia).
Margulis also held a negative view of certain interpretations of Neo-Darwinism that she felt were excessively focused on competition between organisms, as she believed that history will ultimately judge them as comprising "a minor twentieth-century religious sect within the sprawling religious persuasion of Anglo-Saxon Biology." She wrote that proponents of the standard theory "wallow in their zoological, capitalistic, competitive, cost-benefit interpretation of Darwin – having mistaken him for ... Neo-Darwinism, which insists on [the slow accrual of mutations by gene-level natural selection], is in a complete funk.” (ibid)
Margulis met with Lovelock, who explained his Gaia hypothesis to her, and very soon they began an intense collaborative effort on the concept. One of the earliest significant publications on Gaia was a 1974 paper co-authored by Lovelock and Margulis, which succinctly defined the hypothesis as follows: "The notion of the biosphere as an active adaptive control system able to maintain the Earth in homeostasis we are calling the 'Gaia hypothesis.’" Like other early presentations of Lovelock's idea, the Lovelock-Margulis 1974 paper seemed to give living organisms complete agency in creating planetary self-regulation, whereas later, as the idea matured, this planetary-scale self-regulation was recognized as an emergent property of the Earth system, life and its physical environment taken together (Lovelock 1972, Lovelock & Margulis 1974).
This view is reinforced by the dynamical relationships for example between predators and their herbivore prey, where the assumption of an exploitative relationship belies the fact that the population dynamics of each, particularly with the carnivores taking out the stragglers, avoids the herbivores entering boom and bust by denuding the landscape of plants they depend on.
Ultimately, society and culture are also examples of symbiotic survival. However, human emergence has been fraught with species-focused selection, leading to egotistical consciousness, tribal and civil warfare, and sexual wars of dominance between the male and female sexes, in which patriarchy has compromised the sexual prisoners’ dilemma, inhibiting the female reproductive choice essential for XY-based evolution and breaching human equilibrium with the biosphere, in exponentiating devastation of the natural habitats of the planet, climate crisis and resource crisis. The prosocial effects of psilocybe species have also been proposed to have played a role in the emergence of human culture (Rodríguez & Winkelman 2021). The natural correction to this scenario comes from the complex sensitivity of conscious existence not being the exclusive dominant possession of a single species, Homo sapiens, but being achieved in psychic symbiosis.
Fig 65: A spectrum of natural psychoactive substances, all of optimal activity and not superseded by synthetics, except for LSA in morning glory, which is superseded by LSD, and to a certain extent muscimol, which is eclipsed by GABA-ergic Z-drugs. This illustrates efficient but incomplete biospheric evolution of psychoactives. Pink: corresponding natural neurotransmitters. Blue: synthetic pharmaceuticals.
(e) Psychic symbiosis with entheogenic species is a well-established reality. Although traditional use of mushrooms and peyote has tended to involve collection from the wild, since their rediscovery sacred mushrooms have become symbiotically cultivated worldwide. Cannabis indica, Papaver somniferum and Erythroxylon coca have each had several millennia of cultural cultivation. Salvia divinorum originated in the Oaxaca region of Mexico, where it has been cultivated and used for centuries by the Mazatec people, including Maria Sabina herself, as a healing herbal remedy and in religious ceremonies. The species has so adapted to being kept as hidden cultivars that some event after the pollen tube reaches the ovary is aberrant, and no fully developed nutlet has been collected from a Mexican plant.
Fig 65 shows a variety of species bearing psychoactive substances. Certain synthetic molecules, such as the selective serotonin reuptake inhibitor (SSRI) anti-depressants, which inhibit serotonin transporters, and the tricyclics, are absent, but, apart from the lysergic acid amide (LSA) in Ipomoea, these substances are optimal, in the sense that no synthetic drug has effectively superseded them, and many remain essential medicines. Most are receptor agonists or antagonists: for example, the psychedelics psilocin, dimethyl-tryptamine (DMT) and mescaline are serotonin 5HT2a receptor super-agonists, while cocaine inhibits dopamine transporters, increasing pleasure and alertness, and cathinone has similar stimulant effects to amphetamine by activating the trace-amine receptor (TAAR1) responsible for regulating dopamine and nor-epinephrine levels via transporters. By and large, those synthetics which transcend the activity of these natural substances, from methamphetamine to fentanyl and synthetic cannabinoids, have markedly more damaging social effects. Even cocaine in its natural context is a revered spiritual ally: for example the Kogi of the Andean cloud forest, who use coca as their principal spiritual ally, believe that natural coca civilises men. Tobacco (not included in the figure) agonises nicotinic acetylcholine receptors, while scopolamine antagonises muscarinic ones. Morphine agonises μ- and δ-opioid receptors, while salvinorin-A agonises κ-opioid receptors. THC partially agonises the CB1 and CB2 anandamide receptors in neurons and neuroglia. Caffeine antagonises adenosine receptors, blocking the effects of fatigue.
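The receptor relationships stated in the paragraph above can be collected into a simple lookup table. The Python sketch below merely restates the text as a data structure (it is an illustration, not a pharmacological database):

```python
# Natural psychoactive -> (principal molecular target, mode of action),
# restating the relationships given in the text above.
NATURAL_PSYCHOACTIVES = {
    "psilocin":     ("serotonin 5HT2a receptor", "super-agonist"),
    "DMT":          ("serotonin 5HT2a receptor", "super-agonist"),
    "mescaline":    ("serotonin 5HT2a receptor", "super-agonist"),
    "cocaine":      ("dopamine transporter", "inhibitor"),
    "cathinone":    ("trace-amine receptor TAAR1", "activator"),
    "tobacco":      ("nicotinic acetylcholine receptor", "agonist"),
    "scopolamine":  ("muscarinic acetylcholine receptor", "antagonist"),
    "morphine":     ("mu- and delta-opioid receptors", "agonist"),
    "salvinorin-A": ("kappa-opioid receptor", "agonist"),
    "THC":          ("CB1/CB2 anandamide receptors", "partial agonist"),
    "caffeine":     ("adenosine receptors", "antagonist"),
}

def action_of(substance):
    """Summarise a substance's target and action from the table."""
    target, mode = NATURAL_PSYCHOACTIVES[substance]
    return f"{substance}: {mode} at {target}"
```

For instance, `action_of("caffeine")` yields "caffeine: antagonist at adenosine receptors", the adenosine blockade described above.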
This is an expression of symbiotic edge of climax. This doesn't mean that entheogens are an exclusive route to the cosmic mind, but they are in my view sang raal – royal blood. Just like the fractal molecular architecture of the H–CNO bio-elements are a sang raal of biogenesis, the entheogens are sang raal of biospheric union – they are a genuine spiritual experience evoked by union with and interdependence with another species.
Of course this is not the only route to moksha. We can also undertake deep transcendental meditation, but full-blown moksha is rare and generally a more controlled experience of union, which tends to invoke mind-sky mysticism, in which humanity remains the dominant pinnacle of divinity under deity. In some religions, such as that of the Jains, all life is revered, but this is linked to the idea of reincarnation, and the life forms are simply sentient beings rather than genetic biological organisms. Reincarnation is really an opt-out clause for the rarity of moksha, based on the moral law of karma. And yes, it is also a manifestation of the animistic inclusion of the souls of all beings, which is good. But entheogens are prima facie empirical psychic symbiosis, because moksha is achieved in sacred interaction with another species, closing the biospheric symbiotic circle.
Fig 66: The Huichol nierika or portal to the "spirit world”: Cosmological symbiosis realised through psychic symbiosis.
(f) Cosmological symbiosis. This provides a basis for recognising that symbiosis is a foundational principle of the interactive consummation of the physical universe, invoked as a key manifestation of complementarity, evident in the eucaryote symbiosis between archaea and bacteria, sexual complementarity, and the symbiotic relationship between all living species and the biosphere as a whole, on which we all co-depend. This then becomes extended in the following description as a cosmological principle, both in psychic symbiosis with entheogenic species and the ensuing symbiosis between the organismic and cosmic mind and between the cosmic mind and the physical universe as a result of human symbiosis with the cosmos, leading to planetary reflowering and abundance over evolutionary and cosmic time scales.
Secondly, a key element of this description is that it gives a succinct, biologically realisable account of how the subjective aspect of reality, i.e. the panpsychism in quanta, becomes coherently evoked in living systems, revealing a coordinated functional relationship with the physical universe.
Fig 67: An extreme example of single-celled eucaryote adaptation to a quantum mode. The dinoflagellate Nematodinium possesses an ocelloid forming an eye, with a retina made from coopted chloroplast light sensors and a lens with an inset wave plate made from mitochondrial membranes (Gavelis et al. 2015).
This is a three-stage process: (1) the formation of excitable cells in both archaea and bacteria; (2) with the symbiosis between archaea and bacteria to form complex eucaryote cells, the emergence of cellular consciousness. With cell organelles and nuclei, the excitable eucaryote cell gains the full edge-of-chaos sentience associated with physical quantum modes, from light, molecular vibration and the perturbation of chemical orbitals on the excitable membrane, leading to sensory organelles, social signalling, epigenesis and genetic evolution modified by cellular sentience. This is where the major quantum leap of consciousness takes place. (3) Organismic consciousness arises through dynamic elaboration via neuronal coupling in multi-celled organisms and genetic diversification of function with increasing organismic complexity: we arrive at the conscious brains of organisms, utilising coupled cellular sentience, as manifested in our subjective consciousness accompanying brain dynamics. This has in turn induced an explosive increase in complexity, so that the human brain has around 10^10 neurons with 10^15 synapses, forming a massively parallel quantum computer making transitions at the edge of chaos (King 2014), involving quantum measurements of its own wave excitations through discrete pyramidal action potentials timed to the progression of wave coherence (Qasim et al. 2021), as highlighted in fig 78. This complexity has, in climax species from humans through dolphins to elephants, reached a cosmological level unknown elsewhere in the universe outside the biota.
“The fact is, I don’t even know that you’re conscious.
The only thing I know beyond any doubt—and this is one of the central insights of Western philosophy—is Cogito ergo sum.
What Descartes meant is the only thing I’m absolutely sure of is my own consciousness”. (Christof Koch)
Fig 68: Upper row: Jumping spider guarding her young. Squid guarding her egg pouch. Carrying an egg mass of 2,000 to 3,000 eggs and hatchlings for six to nine months can make swimming difficult for Gonatus onyx squid mothers. Golden brown stink bug mothers guard not only the eggs but also the 1st instars until they become 2nd instars. Lower left: Cichlid fishes have evolved into over 1500 species in the isolated lakes of the Great Rift Valley. Competition is fierce, and this mother shelters her offspring in her mouth at any sign of danger and takes them to a safe spot to release them. Lower right: The hierarchical rank of dominant females is a key social feature of hyena survival. At 12 weeks, young hyenas need to learn to observe the rank of all 60 in the clan. Here an alpha female with two young offspring teaches them to extract a head-bobbing concession from an older adolescent of lower rank (BBC).
Just as we don’t directly perceive subjective consciousness in others, but infer it from their lively, purposeful behaviour, in a combination of sentience and volitional will which we sense we can subjectively identify as conscious, cellular subjective consciousness is universal but unrealised. We see subjective consciousness more easily in other mammals, such as our pets, but we also see it in the creatively extraordinary mating dances of birds and spiders, because here conscious sexual selection hones the sheer creativity of evolution through mate choice. We also see it in parenting and cooperative social activity in animal societies. We can also experience subtle expressions of conscious purposiveness in the collective mating songs of crickets in the field, and in the synchronised flashing of fireflies.
"To see a puppy playing [one] cannot doubt that they have free-will"
and if "all animals, then an oyster has and a polype.” (Darwin ex Smith 1978)
“The agency of all sorts of creatures is the most fascinating, self-explanatory, self-manifesting thing
and phenomenally and experientially the most indubitable fact in the world”
(Vetlesen “The Cosmology of the Anthropocene” 2019)
And evolution shows us that these patterns of purposive activity run all the way down the evolutionary tree to the first single-celled eucaryotes. Dictyostelium are single-celled free-living amoebae that show individual purposiveness in their feeding behaviour and have extensive behavioural and genetic homology to human phagocytes. Dictyostelium slugs also show coordinated purposeful behaviour in a highly active colony of a thousand aggregated myxamoebae. These are electrically excitable, pluripotent, motile stem-cells, engaging in coordinated motile actions and clear decision-making manoeuvres as a consensual organism. This is an organismic manifestation of cellular consciousness in action, even though myxamoebae have only graded potentials. Individual amoebae likewise show purposeful activity very similar to that of human macrophages. This is the closest we can come, in social single-celled eucaryotes, to a scientific verification of the kind of consciousness we associate with higher animals.
The problem facing verification is not that the complementary subjective aspect is fuzzy, vitalistic or ill-defined, but that, by its very subjectivity, it is not objectively evident, just as we don’t see one another’s consciousness directly, and it is non-local and largely indivisible, as Buddhist philosophy suggests, forming encapsulated instances of a phenomenon complementary to the universe as a whole. Replication is thus achieved not through objective observation but through veridical verification by empirical experience. This is straightforward with other humans by mutual affirmation, but very difficult with single-celled species, and even more so with individual quanta, which manifest subjectivity only through idiosyncratic individual particle trajectories that approach the statistical average in the wave amplitude.
Thirdly, no matter how subtly we try to monitor brain states, and unravel their biology, chemistry and physics, including edge-of-chaos dynamics and quantum effects, subjectivity cannot be conjured up by an objective interaction of purely objective structures. No assembly of objective elements that has no subjective components can have subjective existence. We may find a dynamic structure of excitons, just as we do in subtle quantum experiments, such as weak quantum measurement and quantum entanglement, but none of these complex structures will have subjective nature if none of the elements do, as we learn from the failure of complex digital systems to demonstrate verifiable features of subjective existence. Therefore a purely objective description founded only on the brain is categorically intractable and incomplete. On the other hand, all physical experiences of the world around us are actually consensual forms of conscious experience, so it is clearly possible to construct a complete cosmology from consensual conscious experiences. A complementary description in which consciousness and the physical universe co-exist can thus solve the hard problem, while retaining all the empirical features in brain dynamics key to a biological realisation of the Cartesian theatre (Baars 1995, 1997). At the same time it resolves the quantum measurement problem through the subjective aspect collapsing the wave functions of the probability multiverse. Moreover the symbiotic cosmology depends only on the current status of the core model of physics, and the standard probability interpretation of the wave function, and does not need to invoke the anthropic cosmological principle (Barrow & Tipler 1988) although this is obviously consistent with it.
Fourthly, both a purely physical cosmology in which human actions are ruled by mindless physical circumstance and religious cosmologies ordained by the will of God place fundamental impediments on our personal autonomy. The materialist physical world view regards consciousness as merely an internal model of physical reality constructed by the brain and volitional will as a delusion having no real effect on the physical world. The religious view asserts that we do possess free will, but casts the entire universe as a moral test for God’s will under pain of dire punishment. The only way out of this dilemma is that subjective consciousness has an effect upon the world at the level of fundamental physics. In the symbiotic cosmology volitional will has a real part in determining the course of history, thus verifying our veridical autonomy of decision-making, i.e. free will, validating and completing our experience of the world around us. Indeed only in such a cosmology can personal autonomy have any real meaning, as opposed to the delusory agency of the materialist view or the morally conscripted will of the religious one.
Fifthly, since we know we no longer exist in a flat Earth universe with beaten-dome firmaments, deity or its alternatives have to be envisaged in more subtle ways, as something that stands outside and beyond the universe but shapes it in some manner outside physical cosmology. It is also extremely unlikely that a God who created the universe as we now know it, with symmetry-breaking forces, galaxies permeating the heavens, and on Earth, evolution leading to climax genetic diversity, including parasites and hosts, predators and prey, did so as a simple moral test of obedience. But we do know that the one and only tangible entity that stands outside and beyond the objective physical universe is subjective conscious existence itself. In fact all religious notions, such as Heaven and Hell, are consciously envisioned realms, just as animist spirit realms are conscious visionary experiences. All personal religious experiences of deity that are not simply religious doctrine taken on faith, including all mystical transformative encounters, come as conscious and generally visionary experiences.
All realisable hope of a tangible “deity” existing thus now resides in the realms of consciousness. Panpsychism also explains how "God consciousness" could arise as a conscious interaction with the universe as a whole. This is the same concept as the atman becoming one with Brahman in Indian spiritual philosophy. We have to accept that if such interaction is unreal, then tangible interaction with any form of “deity” is unreal. However, if the panpsychic postulate is true, then the mind at large has reality as the fully manifest form of the subjective aspect of existence on a cosmic basis, and our individual conscious existences are then extant as functionally encapsulated instances of the mind at large.
Sixthly, there are fundamental evolutionary reasons why the entheogenic species are bound to occur and why they may be able to induce a form of “primary consciousness” evoking cosmic consciousness or the mind at large, as a result of evolution in a universe governed by fractal laws of nature.
The core neurotransmitters involved are modified elementary amino acid amines going back to the origin of life. Their pathways have been conserved since the founding eucaryotes, as social signalling molecules, to provide feedback modes ensuring the survival of the collective organism. The brain is effectively a close-knit social organism of neurons and neuroglia communicating almost exclusively through neurotransmitters, with the core pathways, such as serotonin and dopamine, continuing to have key conserved survival-related modes to ensure a purely electrochemical brain doesn’t deviate from organismic survival. Evolution has honed this by natural selection, so that these modes, as expressed in the default mode network and others, focus on an emotional and cognitive dynamic that gives rise to what we consciously experience as ego. However, this has proven not to be hard-wired but, like the senses, is adaptive. Moreover the highly conserved evolutionary role of serotonin in development, from social amoeba to the human brain, noted above, means that serotonin’s ancient developmental roles may sustain evolutionary pressures in humans favouring serotonin-targeted and other neuronal circuits that promote collective, rather than individual, survival.
Because individual consciousness is actually an encapsulated form of cosmic consciousness, modified forms of these neurotransmitters produced by other species, among them Psilocybe, Lophophora and Psychotria, are able to tweak the serotonin 5HT2a receptor system in such a way as to impede “secondary consciousness”, as in the DMN, allowing the conscious brain to revert to an ego-dissipated form of “primary consciousness”. It appears that, simply by doing so, a form of long-term potentiation results, which has lasting beneficial effects, by allowing the individual to “no longer see through a glass darkly”, in Paul’s words, but now “face to face, knowing even also as we are known”.
Seventhly, this makes the notion of unfolding from encapsulated individual consciousness into the universal cosmic consciousness of the mind at large clearly and unambiguously identifiable with the traditional notion of moksha – escaping the round of birth and death of mortal existence in union of Brahman and atman, and in Buddhist satori [47]. The Upanishadic notion of atman, or inner self, which can become united with Brahman, the cosmic self, provides a central vision of this unification. However in the Buddhist perspective, the reality of the self is transcended by the unbroken wholeness and essential voidness of undivided consciousness central to the ability to experience moksha, which requires an approach where there is no dualistic distinction between subjective and objective aspects. Psychic symbiosis is again a complete realisation of the Shakti-Shiva tantra. As noted in fig 259, the moksha epiphany “is not something you can experience from without, neither is it something just within, in the heart's desire”, but arises when you completely “let go and give your consciousness back to the universe”.
The Chan/Zen notion of Buddha-nature encompasses the idea that the awakened mind of a Buddha is already present in each sentient being. This Buddha-nature was initially equated with the nature of mind, and meditations introspecting on perceiving the mind as a mirror, but this was challenged by Hui-neng in the Zen doctrine of no mind:
The body is the Bodhi-tree.
The mind is like a mirror bright;
Take heed to keep it always clean
And let not dust alight. (Shen-hsiu)

There is no Bodhi-tree
Nor stand of mirror bright
Since all is void,
Where can the dust alight? (Hui-neng)
The idea of the immanent character of the Buddha-nature took shape in a characteristic emphasis on direct insight into, and expression of, this Buddha-nature. It led to a reinterpretation of Indian meditation traditions, and an emphasis on the idea that the teachings and practices are comprehended and expressed "suddenly" – “in one glance”, "uncovered all together" – "together, completely, simultaneously”, as opposed to gradualism, the original approach which says that following the dharma can be achieved only step by step, through an arduous practice, possibly taking several lifetimes. This attests to the validity of entheogenic experiences giving sudden insight of lasting value, contradicting the mistaken notion that genuine enlightenment can be achieved only through a supreme effort of dispassionate top-down control through mindfulness and suppression of ego in favour of compassionate equanimity.
Fanaa (Arabic: فناء fanāʾ ) "to die before one dies" in Sufism is the "passing away" or "annihilation" (of the self). Some Sufis define it as the annihilation of the human ego before God, whereby the self becomes an instrument of God's plan in the world (Baqaa). Other Sufis interpret it as breaking down of the individual ego and a recognition of the fundamental unity of God, creation, and the individual self. Persons having entered this enlightened state are said to obtain awareness of an intrinsic unity (Tawhid) between Allah and all that exists, including the individual's mind – being united with the One or the Truth. This second interpretation is condemned as heretical by orthodox Islam. al-Hallaj was crucified when he cried: "ana al-Haqq - I am the truth” and preached overthrow of the Caliphate:
I am He whom I love, and He whom I love is I:
We are two spirits dwelling in one body.
If thou seest me thou seest Him,
And if thou seest Him thou seest us both
(al-Hallaj, in Armstrong 1993, 263).
Moksha also lies at the source of shamanism and visionaries who initiate and inspire major religions, as exemplified by Yeshua's statements in the Gospel of Thomas – “the kingdom is inside of you, and it is outside of you. When you come to know yourselves, then you will become known, and you will realize that it is you who are the sons of the living father” (3) – “It is I who am the light which is above them all. It is I who am the all. From me did the all come forth, and unto me did the all extend. Split a piece of wood, and I am there. Lift up the stone, and you will find me there” (77). The “ultimate reality” experienced in quantum change experiences also has parallels with the Christian Holy Spirit.
Because psychedelics play directly into the visionary state, in an intense, but consciously negotiable experience, with outstanding transcendent features, it is natural that they should be regarded as central tools, sine qua non, in the discovery process of the central enigma of existential cosmology – the role and function of consciousness in the universe, complementing projects such as the LHC seeking to elucidate the foundations of physical cosmology.
Erwin Schrödinger (1944) in dealing with the paradox of many minds in one world stated:
“There is obviously only one alternative, namely the unification of minds or consciousnesses. Their multiplicity is only apparent, in truth there is only one mind.” “Mind is by its very nature a singulare tantum [48]. I should say: The overall number of minds is just one. I would say it is indestructible, since it has a very peculiar time table, namely mind is always now. There is really no before and after for mind. There is only a now that includes memories and expectations”
Schrödinger quotes Persian Sufi mystic Aziz Nasafi, enlightening the darker gnostic beliefs of Heracleon:
“On the death of any living creature, the spirit returns to the spirit world and the body to the bodily world. In this way however, only the bodies are subject to change. The spiritual world is one single spirit who stands like unto a light behind the bodily world and who, when any single creature comes into being, shines through it as through a window. According to the kind and size of the window, less or more light enters the world. The light itself however remains unchanged”.
Fig 69: Symbiotic cosmology was prefigured in George Greenstein’s “Symbiotic Universe” (1988).
The reality of the mind at large is consistent with the biological and physical reality of a human brain in a transformative state where the brain processes supporting consciousness are freed from their boundary constraints and become unbundled from subject-object polarisation. Since the only manifestations of subjective consciousness we know of in the universe are the biota, organismic consciousness, particularly in such mental states, may be the key and perhaps only realisable way that cosmological consciousness of the universe at large can become fully manifest.
We know that the Cartesian theatre of consciousness (Baars 1997) is a complex affair. It is not just the external senses of sight, sound, touch and smell which mediate varying quantum modes – photons, phonons and molecular orbital perturbations. It includes emotions, bodily sensations, and trains of conscious thought, involving semantic, symbolic and auditory dimensions. Thought is visual, verbal and abstract. We know we experience these subjective modes together, as a totality, in the midst of a dynamic encounter in the real world. But we also know we can experience intense situations in dreams that are perceived as real, rather than as merely imagining something. We also experience visionary states, which may have complex scenes and encounters, but also other more exotic abstract or ecstatic states of consciousness unbound from these same constraints, yet having veridical reality value, and these can approach a state of moksha. Some aspects of our sentience are also shaped by the varying types of receptors for each of the senses, and the way these are processed in wave excitations and action potentials in the nervous system.
This makes it obvious that major aspects of the form of our conscious life and of our brain processing shape the human nature of consciousness. What is critically at stake is the foundational subjective nature of experiential consciousness, complementing the physics, not the particular human evolutionary design of the encapsulation. It is obvious that, even for a person in a state of samadhi, their cosmic consciousness is appearing through a human viewpoint. For example we more easily identify with mammals, as we share their limbic system emotions, and find arthropods more alien.
On the other hand, we have seen that the key transition in the emergence of subjective consciousness is the founding eucaryote cell. This means that both the features of neuronal excitation and the roles of neurotransmitters are widely shared across all metazoan phyla, with some secondary variation. Thus the physics evoking subjective consciousness in an arthropod, or an octopus, is fundamentally homologous to ours, despite major differences in neural circuit design. A key example is the role of serotonin, which maintains the development of the fruiting body sporulation tip in Dictyostelium, and likewise plays in humans the role of a fundamental organiser of brain structure, from the neural groove to differentiating the layers of the prefrontal cortex. Thus, although humans are very different from slime moulds, core aspects of their excitability and social signalling are strongly conserved and remarkably similar.
The fact that people have such similar experiences during quantum change attests to their universality and potentially cosmological status and hence to the validity of psychedelics as a key oracle for discovery of the foundations of consciousness in the mind at large. Realising the symbiotic mind at large solves the hard problem of consciousness and the central enigma of existential cosmology, the nature and purpose of conscious existence, thereby resolving the scientific, eschatological and theistic quests in one “fell swoop”, in a compact, coherent synthesis.
Symbiotic existential cosmology is thus empirically verified in three principal ways:
(1) Existential cosmology, as an interaction between subjective consciousness and physical reality, is verified through affirmation by empirical experience between conscious human volitional agents, in the same manner that legal transactions, such as sworn evidence, fiduciary duties of care and terms of trust are veridically affirmed. This is necessary for applying Occam's razor to eliminate materialistic cosmologies failing the volitional efficacy test fundamental to human decision-making autonomy and personal responsibility for our actions upon the world.
(2) The extent of subjective consciousness across the evolutionary tree can be verified through empirical observation of volitional purposiveness in eucaryotes.
(3) Cosmological symbiosis is verified by statistical evaluation of quantum change experiences of “ultimate reality” in psychedelic and meditational states, as demonstrated in studies by the Johns Hopkins team and others.
Stanislav Grof (1980) notes: “In one of my early books I suggested that the potential significance of LSD and other psychedelics for psychiatry and psychology was comparable to the value the microscope has for biology or the telescope has for astronomy. My later experience with psychedelics only confirmed this initial impression. These substances function as unspecific amplifiers that increase the cathexis (energetic charge) associated with the deep unconscious contents of the psyche and make them available for conscious processing. This unique property of psychedelics makes it possible to study psychological undercurrents that govern our experiences and behaviours to a depth that cannot be matched by any other method and tool available in modern mainstream psychiatry and psychology. In addition, it offers unique opportunities for healing of emotional and psychosomatic disorders, for positive personality transformation, and consciousness evolution”.
I have now taken this to its cosmological conclusion, by taking a Galilean interpretation of Grof’s position that also cosmologically inverts the Copernican principle [49]. That is, I am asserting that subjective consciousness does make human observers, by possession of it, privileged observers of, and participants in, the universe; that a cosmic view of this privileged position is both achievable and facilitated through psychedelics; and that this knowledge or “knowing” invokes upon us a primary responsibility to care for and ensure the survival and flowering of sentient life and consciousness within the universe throughout the generations of life.
All the evidence that we have at our disposal indicates that subjective consciousness is manifest in the biota and that only the biota possess it in the fully fledged form in which we witness it. Notwithstanding the cosmic web, which has fractal similarities to neural tissue (Vazza & Feletti 2020), and the hypothetical idea that some small stars might be conscious (Matloff 2016), the brain appears to be the most complex coherent system in the universe, as the cumulative manifestation of all the forces of nature interacting in consummation of their fractal interaction on all scales, from cosmological symmetry-breaking, running through quarks, protons and neutrons, atomic nuclei, atoms and molecules, to molecular complexes such as the ribosome and membrane, to cell organelles, cells, tissues, organs such as the brain, societies of organisms and the symbiotic biosphere. We know of no other process in the universe, from black holes to stars and the gas clouds of nebulae, or even dark matter, that cumulatively completes the interaction of the fundamental forces in this way.
The evidence also indicates that, while psychedelics create diverse forms of altered conscious states, spanning the entire spectrum, from the paradisiacal to the diabolical, requiring careful guidance, and having significance varying from the sublime to the ridiculous, they constitute humanity’s most powerful research avenue to discover what the inner dimensions of conscious experience are, complementing experiences of dreaming and other states, with a central avenue which can be induced and explored, both scientifically and personally by the waking mind. And finally, underlying these diverse visionary phenomena is a deeper enlightenment at the centre of this cyclone, which has the potential to resolve what the existential status of conscious experience is cosmologically, in the experience of moksha, transcending the cycle of birth and death in mystical transformative experiences of long-lasting psychic benefit, whose common features imply they are accessing a common primary conscious condition.
Working to validate entheogenic experiences, and conscious states generally, requires a different type of verification from the physical, to establish a phenomenology of the subjective psychedelic state. People's experiences of daydreaming and dreaming sleep confirm that very real events can occur, particularly in dreaming. The nature of space and time in dreaming is also undetermined, as some people report precognitive dreams (Dunne 1927). We don't usually assess the reality value of internal mental states as the same as everyday experience of the world, but they still often possess features which we recognise and identify as having veridical reality. Likewise psychedelic states form a diverse population, from frank delusions to common claims of profound experiences of a life-changing nature.
Ralph Metzner's (2017) radical empiricism approach gives the foundations of how to assemble such a phenomenology:
“Over 100 years ago the American philosopher William James said that radical empiricism would not dismiss any observations just because we don’t have a theory or model to explain them in our current worldview. For that reason, James allowed drug experiences (with nitrous oxide), mystical visions, parapsychological or psi-phenomena and telepathic communications, into science for consideration and further observations. HH the Dalai Lama has formulated a similar epistemology, by his notion of “first person empiricism” – empirical observations made with our own senses. Repeated observations of similar situations by the same observer or similar observers gradually make the observations less “purely subjective” and step-by-step more objective. So the basic formula of radical empiricism is objective = subjective plus one or more. If only one person sees something, it remains purely subjective, like a fantasy or a dream. But if at least one other person sees it and can say “yes, I see it” it becomes a little bit more objective, and this can have profoundly healing implications. ... So when people speak about “entities” or “spirits” or “demons” or “visions” or “hallucinations” we want to first separate the observations from the speculations. Then we can gather further observations – which might have been recorded in various books or in works of art, and start the process of making systematic comparisons. ... Our intuitions and subtle inner perceptions can be mistaken just like any outer perceptions – and can and should always be subject to repetition and repeated verification”.
Fig 70: Entheogens are a key instrument providing the subjective conscious equivalent to the LHC’s role in physical cosmology. Left: Curandero (Luke Brown). Right: Particle shower (Pb ion collision LHC). Just as there are many visions surrounding one nierika portal to the ‘spirit’ world, so there have been a multitude of particle showers for one Higgs particle discovery.

Future generations will feel betrayed by Western culture, discovering that:
(a) A scientific discovery which has features consistent with being the subjective equivalent of the LHC [50], a consciousness reactor that could give us access to the core cosmological secrets of the universe, had been suppressed for half a century by the very culture that claims to be the climax of scientific enlightenment.
(b) That this had happened because this very discovery was perceived by political leaders to be threatening to a consumption-driven society based on venture capital exploitation of the planet’s resources for financial gain, combined with adherence to a religious belief which requires the drinking of the saviour’s blood and eating His flesh because “without the shedding of blood there is no remission of sin.”
(c) That this repression, reminiscent of the dark ages, has intentionally acted in such a way as to seek to prevent us from attaining moksha or psychic union with the cosmos, the very ideal that lies at the heart of the spiritual and religious quest for enlightenment and transcendence, because it risks unraveling the status quo.
(d) That entheogens, given the respect due, could also have critically helped alleviate the planetary crisis that has ensued from human evolutionary emergence as a tribal society, and unfold a symbiotic psychic relationship with reality, just as we are obligately symbiotic with the food and medicinal species on which we depend.
(e) That instead of helping enable humanity to ensure its survival and the survival of the diversity of life on this planet on which humanity depends, this repression had caused a 50 year delay in addressing a climate and biodiversity crisis, significantly risking the economic welfare, health and survival of these future generations.
This appears to be the situation we are just beginning to emerge from, and yet are still facing today. For the planet to continue to survive over evolutionary and cosmological time scales, climax consciousness is, and has to be, fully sensitive as a complex system to the biosphere. This is necessary to be able to manifest a cosmologically conscious response, consistent with perennial survival on evolutionary time scales. The fully evolved consciousness is thus, in its complete form, symbiotically biospheric. Its fullest manifestation is biospheric and cannot, in evolutionary terms, be the exclusive province of a single dominant species, Homo sapiens.
The Conscious Brain and the Cosmological Universe [24]
Solving the Central Enigma of Existential Cosmology
Chris King – 21-6-2021
In memory of Maria Sabina and Gordon Wasson
Contents
1 The Cosmological Problem of Consciousness
2 Psychedelic Agents in Indigenous American Cultures
3 Psychedelics in the Brain and Mind
4 Therapy and Quantum Change: Scientific Results
5 Fractal Biocosmology, Darwinian Cosmological Panpsychism, Cosmological Symbiosis
Abstract:
This article resolves the central enigma of existential cosmology – the nature and role of subjective experience – thus providing a direct solution to the "hard problem of consciousness". This solves, in a single coherent cosmological description, the core existential questions surrounding the role of the biota in the universe, the underlying process supporting subjective consciousness and the meaning and purpose of conscious existence. This process has pivotal importance for avoiding humanity causing a mass extinction of biodiversity and possibly our own demise, instead becoming able to fulfil our responsibilities as guardians of the unfolding of sentient consciousness on evolutionary and cosmological time scales.
The article overviews cultural traditions and current research into psychedelics [25] and formulates a panpsychic cosmology, in which the mind at large complements the physical universe, resolving the hard problem of consciousness, extended to subjective conscious volition over the universe, and the central enigmas of existential cosmology and eschatology, in a symbiotic cosmological model. The symbiotic cosmology is driven by the fractal non-linearities of the symmetry-broken quantum forces of nature, subsequently turned into a massively parallel quantum computer by biological evolution (Darwin 1859, 1889). Like Darwin’s insights, this triple cosmological description is qualitative rather than quantitative, but nevertheless accurate. Proceeding from fractal biocosmology and panpsychic cosmology, through edge-of-chaos dynamical instability, the excitable cell and then the eucaryote symbiosis create a two-stage process, in which the biota capture a coherent encapsulated form of panpsychism, which is selected for because it aids survival. This becomes sentient in eucaryotes due to excitable membrane sensitivity to quantum modes and eucaryote adaptive complexity. Founding single-celled eucaryotes already possessed the genetic ingredients of excitable neurodynamics, including G-protein linked receptors and a diverse array of neurotransmitters, as social signalling molecules ensuring survival of the collective organism. The brain conserves these survival modes, so that it becomes an intimately-coupled society of neurons communicating synaptically via the same neurotransmitters, modulating key survival dynamics of the multicellular organism, and forming the most complex, coherent dynamical structures in the physical universe.
This results in consciousness as we know it, shaped by evolution for the genetic survival of the organism. In our brains, this becomes the existential dilemma of ego in a tribally-evolved human society, evoked in core resting state networks, such as the default mode network, also described in the research as "secondary consciousness", in turn precipitating the biodiversity and climate crises. However, because the key neurotransmitters are simple, modified amino acids, the biosphere will inevitably produce molecules modifying the conscious dynamics, exemplified in the biospheric entheogens, in such a way as to decouple the ego and enable existential return to the "primary consciousness" of the mind at large, placing the entheogens as conscious equivalents of the LHC in physics. Thus a biological symbiosis between Homo sapiens and the entheogenic species enables a cosmological symbiosis between the physical universe and the mind at large, resolving the climate and biodiversity crises long term in both a biological and a psychic symbiosis, ensuring planetary survival.
The Cosmological Axiom of Primal Subjectivity
We put this into precise formulation. Taking into account that the existence of primary subjectivity is an undecidable proposition from the physical point of view, in the sense of Gödel, but is empirically certain from the experiential point of view, we come to the following:
(1) We start on home ground, i.e. with human conscious volition, where we can clearly confirm both aspects of reality – subjectively experiential and objectively physical.
(2) We then affirm as empirical experience, that we have efficacy of subjective conscious volition over the physical universe, manifest in every intentional act we make, as is necessary for our behavioural survival – as evidenced by my consciously typing this passage, and that this is in manifest conflict with pure physicalism asserting the contrary.
(3) We now apply Occam's razor, not just on parsimony, but categorical inability of pure materialism, using only physical processes, which can only be empirically observed, to deal with subjective consciousness, because this can only be empirically experienced and is private to observation. This leads to intractability of the hard problem of consciousness. Extended to the physicalist blanket denial of conscious physical volition, which we perceive veridically in our conscious perception of our enacted intent, this becomes the extended hard problem. Classical neuroscience accepts consciousness only as an epiphenomenon – an internal model of reality constructed by the brain, but denies volition, as a delusion perpetrated by evolution to evoke the spectre of intentional behaviour.
(4) We then scrutinise the physical aspect and realise we cannot empirically confirm classical causal closure of the universe in brain dynamics because: (a) the dynamics is fractal to the quantum-molecular level, so non-IID processes don't necessarily converge to the classical, and (b) experimental verification is impossible because we would need essentially to trace the neurodynamics of every neuron, or a very good statistical sample, when the relevant dynamics is at the unstable edge of chaos and so is quantum sensitive. Neither can we prove consciousness causes brain states leading to volition, because consciousness can only be experienced and not observed, so it is a genuinely undecidable proposition physically.
(5) This sets up the status of “Does subjective conscious volition have efficacy over the universe?” as an empirically undecidable cosmological proposition from the physical perspective, in the sense of Gödel. From the experiential perspective, however, it is an empirical certainty.
(6) We therefore add a single minimal cosmological axiom, to state the affirmative proposition – “Subjective conscious volition has efficacy over the physical universe”. We also need to bear in mind that a physicalist could make the counter proposition that it doesn’t, and both could in principle be explored, like the continuum hypothesis in mathematics – that there is no infinite cardinality between those of the countable rationals and uncountable reals [1].
(7) We now need to scale this axiom all the way down to the quantum level, because it is a cosmological axiom that means that the universe has some form of primal subjective volition, so we need to investigate its possible forms. The only way we can do this, as we do with one another about human consciousness, where we can’t directly experience one another’s consciousness, is to make deductions from the physical effects of volition – in humans, organisms, amoebo-flagellates, prokaryotes, biogenesis, butterfly effect systems and quanta.
(8) We immediately find that quantum reality has two complementary processes:
(a) The wild wave function which contains both past and future implicit “information” under special relativity, corresponding to the quantum-physical experiential interface of primal subjectivity.
(b) Collapse of the wave function, which violates causality, and in which the normalised probability distribution of the wave function leaves the quantum total freedom over where it is measured, corresponding to the quantum-physical volitional interface of primal subjectivity.
(9) Two potentially valid cosmologies from the physical perspective, but only one from the experiential perspective:
As with any undecidable proposition, pure physicalists can, from the objective perspective, continue to contend that the quantum has no consciousness or free will and that uncertainty is “random” – citing the lack of any obvious bias violating the Born interpretation – and develop that approach, thus claiming volition is a self-fulfilling delusion of our internal model of reality. But Symbiotic Existential Cosmology can validly argue that uncertainty could be due to a complex quasi-random process, e.g. a special-relativistic transactional collapse process, over which the quantum, by virtue of its wave-function context, by default does have “conscious” free will, allowing us and the diversity of life to also be subjectively conscious and affect the world around us, unlike in the pure materialist model.
An Accolade to Cathy Reason
The first part of the answer to the continuum hypothesis CH – that there is no infinite cardinal between the rationals and reals – was due to Kurt Gödel. In 1938 Gödel proved that it is impossible to disprove CH using the usual axioms for set theory. So CH could be true, or it could be unprovable.
In 1963 Paul Cohen finally showed that it was in fact unprovable.
The first part of the answer to the cosmological axiom CA – that subjective consciousness is a cosmological complement to the physical universe – was due to Cathy Reason. In 2016 she proved that it is impossible to establish certainty of consciousness through a physical process. So CA could be false, or it could be unprovable. In 2019 and 2021, with Kushal Shah, she proved the no-supervenience theorem – that the operation of self-certainty of consciousness is inconsistent with the properties possible in any meaningful definition of a physical system – effectively showing CA is certain experientially.
In 2023 in Symbiotic Existential Cosmology, Chris King finally showed that CA, in the form of conscious volition, is in fact unprovable physically, although it is certain experientially.
1 The Cosmological Problem of Consciousness
The human existential condition consists of a complementary paradox. To survive in the world at large, we have to accept the external reality of the physical universe, but we gain our entire knowledge of the very existence of the physical universe through our conscious experiences, which are entirely subjective and are complemented by other experiences in dreams and visions which also sometimes have the genuine reality value we describe as veridical. The universe is thus in a fundamental sense a description of our consensual subjective experiences of it, experienced from birth to death, entirely and only through the relentless unfolding spectre of subjective consciousness.
Fig 71: (a) Cosmic evolution of the universe (WMAP, King 2020b). Life has existed on Earth for a third of the universe’s 13.7-billion-year lifetime. (b) Symmetry-breaking of a unified superforce into the four wave-particle forces of nature – colour, weak, electromagnetic and gravity – with the first three forming the standard model and, with the weak-field limit of general relativity (Wilczek 2015), comprising the core model. (c) Quantum uncertainty defined through wave coherence beats. (d) Schrödinger cat experiment. Schrödinger famously said “The total number of minds in the universe is one”, preconceiving Huxley’s notion of the mind at large used as this monograph’s basis for cosmological symbiosis. Quantum theory says the cat is in both live and dead states with probability 1/2, but the observer finds the cat either alive or dead, suggesting the conscious observer collapses the superimposed wave function. (e) Feynman diagrams in special-relativistic quantum field theories involve both retarded (usual) and advanced (time-backwards) solutions, because the Lorentz energy transformations ensuring the atom bomb works have positive and negative energy solutions. Thus electron scattering (iv) is the same as positron creation–annihilation [26]. (f) Double-slit interference shows a photon emitted as a particle passes through both slits as a wave before being absorbed on the photographic plate as a particle. The trajectory of an individual particle is quantum uncertain, but the statistical distribution confirms the particles have passed through the slits as waves. (g) Cosmology of conscious mental states (King 2021a). Kitten’s Cradle, a love song.
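The caption’s point – equal probabilities of 1/2 in the superposition, yet one definite cat on each observation – can be illustrated with Born-rule sampling. This is a hypothetical toy, not a model of the collapse mechanism itself; the function name and amplitudes are invented for illustration:

```python
import random

def measure_cat(amplitudes, rng=random.Random(42)):
    """Collapse a superposition by Born-rule sampling.

    amplitudes: dict mapping outcome -> amplitude.
    Returns a single definite outcome, as seen on opening the box.
    """
    outcomes = list(amplitudes)
    probs = [abs(a) ** 2 for a in amplitudes.values()]  # Born rule |psi|^2
    return rng.choices(outcomes, weights=probs, k=1)[0]

# Equal superposition |alive> + |dead>, amplitude 1/sqrt(2) each
amp = {"alive": 2 ** -0.5, "dead": 2 ** -0.5}
counts = {"alive": 0, "dead": 0}
for _ in range(10_000):
    counts[measure_cat(amp)] += 1
# Every run yields one definite cat; frequencies approach 1/2 each.
print(counts)
```

Each individual call returns a single outcome, while the ensemble statistics recover the 1/2 probabilities – the same contrast the cat paradox trades on.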
The religious anthropocentric view of the universe was overthrown when Copernicus, in 1543, deduced that the Earth, instead of being at the centre of the cosmos, rotated along with the other solar-system planets in an orbit around the Sun. Galileo defended heliocentrism based on his astronomical observations of 1609. By 1615, Galileo's writings on heliocentrism had been submitted to the Roman Inquisition, which concluded that heliocentrism was foolish, absurd, and heretical since it contradicted Holy Scripture. He was tried by the Inquisition, found "vehemently suspect of heresy", and forced to recant. He spent the rest of his life under house arrest.
The Copernican revolution in turn resulted in the rise of classical materialism, defined by the laws of motion of Isaac Newton (1642–1726), after watching the apple fall under gravity, despite Newton himself being a devout Arian Christian who used scripture to predict the apocalypse. The classically causal Newtonian world view, and Pierre Simon Laplace’s (1749–1827) view of mathematical determinism – “that if the current state of the world were known with precision, it could be computed for any time in the future or the past” – came to define the universe as a classical mechanism in the ensuing waves of scientific discovery in classical physics, chemistry and molecular biology, climaxing with the decoding of the human genome and validating the much more ancient atomic theory of Democritus (c. 460 – c. 370 BC). The classically causal universe of Newton and Laplace has since been fundamentally compromised by the discovery of quantum uncertainty and its “spooky” features of quantum entanglement.
In counterposition to materialism, George Berkeley (1685 – 1753) is famous for his philosophical position of "immaterialism", which denies the existence of material substance and instead contends that familiar objects like tables and chairs are ideas perceived by our minds and, as a result, cannot exist without being perceived. Berkeley argued against Isaac Newton's doctrine of absolute space, time and motion in a precursor to the views of Mach and Einstein. Interest in Berkeley's work increased after 1945 because he had tackled many of the issues of paramount interest to 20th century philosophy, such as perception and language.
The core reason for the incredible technological success of science is not the assumption of macroscopic causality, but the fact that quantum particles come in two kinds. The integral-spin particles, called bosons, such as photons, can all cohere together, as in a laser, and thus make forces and radiation; but the half-integer-spin particles, called fermions, such as protons and electrons, which can congregate only in pairs of complementary spin, are incompressible and thus form matter, inducing a universal fractal complexity via the non-linearity of the electromagnetic and other quantum forces. The fermionic quantum structures are small, discrete and divisible, so the material world can be analysed in great detail. Given the quantum universe and the fact that brain processes are highly uncertain, given changing contexts and unstable tipping points at the edge of chaos, objective science has no evidential basis to claim the brain is causally closed and thus falsely conclude that we have no agency to apply our subjective consciousness to affect the physical world around us. By agency here I mean full subjective conscious volition, not just objective causal functionality (Brizio & Tirassa 2016, Moreno & Mossio 2015), or even autopoiesis (Maturana & Varela 1972).
The nature of conscious experience remains the most challenging enigma in the scientific description of reality, to the extent that we not only do not have a credible theory of how this comes about but we don’t even have an idea of what shape or form such a theory might take. While physical cosmology is an objective quest, leading to theories of grand unification, in which symmetry-breaking of a common super-force led to the four forces of nature in a big-bang origin of the universe, accompanied by an inflationary beginning, the nature of conscious experience is entirely subjective, so the foundations of objective replication do not apply. Yet for every person alive today, subjective conscious experiences constitute the totality of all our experience of reality, and physical reality of the world around us is established through subjective consciousness, as a consensual experience of conscious participants.
Erwin Schrödinger: Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental.
Arthur Eddington: The stuff of the world is mind stuff.
J. B. S. Haldane: We do not find obvious evidence of life or mind in so-called inert matter...; but if the scientific point of view is correct, we shall ultimately find them, at least in rudimentary form, all through the universe.
Julian Huxley: Mind or something of the same nature as mind must exist throughout the entire universe. This is, I believe, the truth.
Freeman Dyson: Mind is already inherent in every electron, and the processes of human consciousness differ only in degree and not in kind from the processes of choice between quantum states which we call “chance” when they are made by electrons.
David Bohm: It is implied that, in some sense, a rudimentary consciousness is present even at the level of particle physics.
Werner Heisenberg: Is it utterly absurd to seek behind the ordering structures of this world a “consciousness” whose “intentions” were these very structures?
Andrei Linde: Will it not turn out, with the further development of science, that the study of the universe and the study of consciousness will be inseparably linked, and that ultimate progress in the one will be impossible without progress in the other?
The hard problem of consciousness (Chalmers 1995) is the problem of explaining why and how we have phenomenal first-person subjective experiences, sometimes called “qualia”, that feel “like something”, and, more than this, evoke the entire panoply of all our experiences of the world around us. Chalmers comments (201) “Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.” By comparison, we assume there are no such experiences for inanimate things such as a computer, or a sophisticated form of artificial intelligence. Two extensions of the hard problem are the hard problem extended to volition, and the hard manifestation problem: how is experience manifested in waking perception, dreams and entheogenic visions?
Fig 71b: The hard problem's explanatory gap – an uncrossable abyss.
Although there have been significant strides in electrodynamic (EEG and MEG), chemodynamic (fMRI) and connectome imaging of active conscious brain states, we still have no idea how such collective brain states evoke the subjective experience of consciousness to form the internal model of reality we call the conscious mind, or for that matter volitional will. In Jerry Fodor’s words: “Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious.”
Nevertheless opinions about the hard problem and whether consciousness has any role in either perception or decision-making remain controversial and unresolved. The hard problem is contrasted with easy, functionally definable problems, such as explaining how the brain integrates information, categorises and discriminates environmental stimuli, or focuses attention. Subjective experience does not seem to fit this explanatory model. Reductionist materialists, who are common in the brain sciences, particularly in the light of the purely computational world views induced by artificial intelligence, see consciousness and the hard problem as issues to be eliminated by solving the easy problems. Daniel Dennett (2005) for example argues that, on reflection, consciousness is functionally definable and hence can be corralled into the objective description. Arguments against the reductionist position often cite that there is an explanatory gap (Levine 1983) between the physical and the phenomenal. This is also linked to the conceivability argument, whether one can conceive of a micro-physical “zombie” version of a human that is identical except that it lacks conscious experiences. This, according to most philosophers (Howell & Alter 2009), indicates that physicalism, which holds that consciousness is itself a physical phenomenon with solely physical properties, is false.
David Chalmers (1995), speaking in terms of the hard problem, comments: “The only form of interactionist dualism that has seemed even remotely tenable in the contemporary picture is one that exploits certain properties of quantum mechanics.” He then goes on to cite (a) John Eccles’ (1986) suggestion that consciousness provides the extra information required to deal with quantum uncertainty, thus not interrupting causally deterministic processes, if they occur, in brain processing, and (b) the possible involvement of consciousness in “collapse of the wave function” in quantum measurement. We next discuss both of these loopholes in the causal deterministic description.
Two threads in our cosmological description indicate how the complementary subjective and objective perspectives on reality might be unified. Firstly, the measurement problem in the quantum universe appears to involve interaction with a conscious observer. While the quantum description involves an overlapping superposition of wave functions, the Schrödinger cat paradox, fig 71(d), shows that when we submit a cat in a box to a quantum measurement leading to a 50% probability of a particle detection smashing a flask of cyanide and killing the cat, the conscious observer who opens the box does not find a superposition of live and dead cats, but one cat, either stone dead or very much alive. This leads to the idea that subjective consciousness plays a critical role in collapsing the superimposed wave functions into a single component, as noted by John von Neumann, who stated that collapse could occur at any point between the precipitating quantum event and the conscious observer, and by others (Greenstein 1988, Stapp 1995, 2007).
Wigner & Margenau (1967) used a variant of the cat paradox to argue for conscious involvement. In this version, the box contains a conscious friend who reports the result later, leading to a paradox about when the collapse occurs – i.e. when the friend observes it or when Wigner does. Wigner discounted the observer being in a superposition themselves, as this would be preceded by a state of effective “suspended animation”. As this paradox does not occur if the friend is a non-conscious mechanistic computer, it suggests consciousness is pivotal. Henry Stapp (2009), in "Mind, Matter and Quantum Mechanics", gives an overview of the more standard theories.
While systems as large as 2000 atoms (Fein et al. 2019), gramicidin A1 – a linear antibiotic polypeptide composed of 15 amino acids (Shayeghi et al. 2020) – and even a deep-frozen tardigrade (Lee et al. 2021) have been found in superpositions of states producing interference fringes, indicating that the human body or brain could be represented as a quantum superposition, it is unclear that subjective experience can. More recent experiments involving two interconnected Wigner’s-friend laboratories also suggest that the quantum description “cannot consistently describe the use of itself” (Frauchiger & Renner 2018). An experimental realisation (Proietti et al. 2019) implies that there is no such thing as objective reality, as quantum mechanics allows two observers to experience different, conflicting realities. These paradoxes underlie the veridical fact that conscious observers make and experience a single course of history, while the physical universe of quantum mechanics is a multiverse of probability worlds, as in Everett’s many-worlds description, if collapse does not occur. This postulates split observers, each unaware of the existence of the other, but the universe they are then looking at seems inexorably split into multiverses, which we do not experience.
In this context Barrett (1999) presents a variety of possible solutions involving many worlds and many or one mind and in the words of Saunders (2001) in review has resonance with existential cosmology:
Barrett’s tentatively favoured solution [is] the one also developed by Squires (1990). It is a one-world dualistic theory, with the usual double-standard of all the mentalistic approaches: whilst the physics is precisely described in mathematical terms, although it concerns nothing that we ever actually observe, the mental – in the Squires-Barrett case a single collective mentality – is imprecisely described in non-mathematical terms, despite the fact that it contains everything under empirical control.
In quantum entanglement, two or more particles can be prepared within the same wave function. For example, in a laser, an existing wave function can capture more and more photons in phase with a standing wave between two mirrors by stimulated emission from the excited medium. In other experiments, pairs of particles can be generated inside a single wave function. For example, an excited calcium atom with two outer electrons can emit a blue and a yellow photon with complementary polarisations in a spin-0 to spin-0 transition, as shown in fig 72(8). In this situation, when we sample the polarisation of one photon, the other instantaneously has the complementary polarisation, even when the two detections take place without there being time for any information to pass between the detectors at the speed of light. John Bell (1964) proved that the results predicted by standard quantum mechanics when the two detectors were set at varying angles violated the constraints defined by local Einsteinian causality, implying quantum non-locality – the “spooky action at a distance” decried by Einstein, Podolsky and Rosen. The experimental verification was confirmed by Alain Aspect and others (1982).
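Bell’s violation can be checked numerically. In this minimal sketch (an illustration, not taken from the text), the quantum polarisation correlation for a same-polarisation photon pair is E(a, b) = cos 2(a − b), and the CHSH combination of four analyser settings exceeds the local-causal bound of 2:

```python
import math

def E(a, b):
    """Quantum polarisation correlation for the cascade photon pair
    (same-polarisation entanglement): E = cos 2(a - b), angles in radians."""
    return math.cos(2 * (a - b))

# CHSH analyser settings (degrees) of the kind used in Aspect-type tests
a1, a2, b1, b2 = map(math.radians, (0.0, 45.0, 22.5, 67.5))

# CHSH combination: any local-causal theory obeys |S| <= 2
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"S = {S:.3f}")  # quantum prediction 2*sqrt(2) ≈ 2.828 > 2
```

The quantum value 2√2 ≈ 2.83 sits above the classical ceiling of 2, which is the numerical content of the non-locality Bell proved and Aspect measured.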
Fig 71c: Cancellation of off-diagonal entangled components in decoherence by damping, modelling extraneous collisions (Zurek 2003).
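The damping of off-diagonal components can be illustrated with a toy two-state density matrix. The exponential damping factor is an assumed illustrative form in the spirit of Zurek-style decoherence, not the text’s own equations:

```python
import numpy as np

# Equal superposition as a density matrix: off-diagonals carry the coherence.
rho0 = np.full((2, 2), 0.5)

def decohere(rho, t, tau=1.0):
    """Damp off-diagonal (coherence) terms by exp(-t/tau), modelling
    extraneous collisions; diagonal classical probabilities are untouched."""
    d = np.exp(-t / tau)
    out = rho.copy()
    out[0, 1] *= d
    out[1, 0] *= d
    return out

rho = decohere(rho0, t=5.0)
# Coherences shrink toward 0, leaving a classical mixture diag(0.5, 0.5).
# Note: this suppresses interference; it does NOT pick one outcome,
# which is why decoherence alone does not amount to collapse.
print(rho)
```

The final comment is the point the surrounding text makes: decoherence turns a superposition into a mixture, but selecting a single outcome is a further step.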
Other notions of collapse (see King 2020b for details) involve interaction with third-party quanta and the world on classical scales. All forms of quantum entanglement (Aspect et al. 1982), or its broader phase generalisation, quantum discord (Ollivier & Zurek 2002) involve decoherence (Zurek 1991, 2003), because the system has become coupled to other wave-particles. But these just correspond to further entanglements, not collapse. Recoherence (Bouchard et al. 2015) can reverse decoherence, supporting the notion that all non-conscious physical structures can exist in superpositions. Another notion is quantum darwinism (Zurek 2009), in which some states survive because they are especially robust in the face of decoherence. Spontaneous collapse (Ghirardi, Rimini, & Weber 1986) has a similar artificiality to Zurek’s original decoherence model, in that both include an extra factor in the Schrödinger equations forcing collapse.
Penrose's objective-collapse theory postulates the existence of an objective threshold governing the collapse of quantum states, related to the difference in the spacetime curvature of these states in the universe's fine-scale structure. He suggested that at the Planck scale, curved spacetime is not continuous but discrete, and that each separated quantum superposition has its own piece of spacetime curvature, a blister in spacetime. Penrose suggests that gravity exerts a force on these spacetime blisters, which become unstable above the Planck scale and collapse to just one of the possible states. Atomic-level superpositions would require 10 million years to reach OR threshold, while an isolated 1 kilogram object would reach OR threshold in 10⁻³⁷ s. Objects somewhere between these two scales could collapse on a timescale relevant to neural processing. An essential feature of Penrose's theory is that the choice of states when objective reduction occurs is selected neither randomly nor algorithmically. Rather, states are selected by a "non-computable" influence embedded in the Planck scale of spacetime geometry, which in "The Emperor's New Mind" (Penrose 1989) he associated with conscious human reasoning.
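Penrose’s quoted timescales follow from the relation τ ≈ ħ/E_G, with E_G the gravitational self-energy of the superposed mass displacement. A minimal order-of-magnitude sketch: here E_G ≈ G m²/r, and the separation scale r is an illustrative assumption, not a value given in the text:

```python
# Order-of-magnitude sketch of Penrose's objective-reduction timescale.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34  # reduced Planck constant, J s

def or_timescale(m, r):
    """Collapse time tau = hbar / E_G with E_G = G m^2 / r, in seconds."""
    return hbar * r / (G * m * m)

# 1 kg mass superposed over an assumed displacement r ~ 1e-13 m gives
# tau of order 1e-37 s, matching the scale quoted above.
print(f"{or_timescale(1.0, 1e-13):.1e} s")
```

Smaller masses push τ up by many orders of magnitude, which is why atomic superpositions persist essentially forever on this account while kilogram-scale ones cannot.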
Spontaneous random collapse (GRW) models (Ghirardi, Rimini, & Weber 1986) include an extra factor complementing the Schrödinger equation, forcing random collapse over a finite time. Both Penrose’s gravitationally induced collapse and the variants of GRW theory, such as continuous spontaneous localisation (CSL), involving gradual, continuous collapse rather than a sudden jump, have recently been partially eliminated by experiments derived from neutrino research, which have failed to detect the very faint X-ray signals that the local jitter of physical collapse models implies.
In the approach of SED (de la Peña et al. 2020), the stochastic aspect corresponds to the effects of the collapse process into the classical limit, but here consciousness can be represented by the zero point field (ZPF) (Keppler 2018). Finally we have pilot waves [27] (Bohm 1952), which identify particles as having real positions, thus not requiring wave function collapse, but have problems with handling creation of new particles. Images of such trajectories can be seen in weak quantum measurement and surreal Bohmian trajectories in fig 57.
David Albert (1992), in "Quantum Mechanics and Experience", cites objections to virtually all descriptions of collapse of the wave function. In terms of von Neumann's original definition, which allowed for collapse to take place any point from the initial event to the conscious observation of it, what he concluded was that there must be two fundamental laws about how the states of quantum-mechanical systems evolve:
Without measurements all physical systems invariably evolve in accordance with the dynamical equations of motion, but when there are measurements going on, the states of the measured systems evolve in accordance with the postulate of collapse. What these laws actually amount to will depend on the precise meaning of the word measurement. And it happens that the word measurement simply doesn't have any absolutely precise meaning in ordinary language; and it happens (moreover) that von Neumann didn't make any attempt to cook up a meaning for it, either.
However, if collapse always occurs at the last possible moment, as in Wigner's (1961) view:
All physical objects almost always evolve in strict accordance with the dynamical equations of motion. But every now and then, in the course of some such dynamical evolutions, the brain of a sentient being may enter a state wherein states connected with various different conscious experiences are superposed; and at such moments, the mind connected with that brain opens its inner eye, and gazes on that brain, and that causes the entire system (brain, measuring instrument, measured system, everything) to collapse, with the usual quantum-mechanical probabilities, onto one or another of those states; and then the eye closes, and everything proceeds again in accordance with the dynamical equations of motion.
We thus end up with either purely physical systems, which evolve in accordance with the dynamical equations of motion or conscious systems which do contain sentient observers. These systems evolve in accordance with the more complicated rules described above. ... So in order to know precisely how things physically behave, we need to know precisely what is conscious and what isn't. What this "theory" predicts will hinge on the precise meaning of the word conscious; and that word simply doesn't have any absolutely precise meaning in ordinary language; and Wigner didn't make any attempt to make up a meaning for it; and so all this doesn't end up amounting to a genuine physical theory either.
But he also discounts related theories relating to macroscopic processes:
All physical objects almost always evolve in strict accordance with the dynamical equations of motion. But every now and then, in the course of some such dynamical evolutions (in the course of measurements, for example), it comes to pass that two macroscopically different conditions of a certain system (two different orientations of a pointer, say) get superposed, and at that point, as a matter of fundamental physical law, the state of the entire system collapses, with the usual quantum-mechanical probabilities, onto one or another of those macroscopically different states. But then we again have two sorts of systems – microscopic and macroscopic – and again we don't precisely know what macroscopic is.
He even goes to the trouble of showing that no obvious empirical test can distinguish between such variations, including decoherence e.g. from air molecules, and with the GRW theory, where other problems arise about the nature and consequences of collapse on future evolution.
Tipler (2012, 2014), using quantum operators, shows that, in the many-worlds interpretation, quantum non-locality ceases to exist, because the first measurement of an entangled pair, e.g. spin up or down, splits the multiverse into two deterministic branches, in each of which the state of the second particle is determined to be complementary, so no non-local "spooky action at a distance" needs to, or can, take place.
This also leads to a fully-deterministic multiverse:
Like the electrons, and like the measuring apparatus, we are also split when we read the result of the measurement, and once again our own split follows the initial electron entanglement. Thus quantum nonlocality does not exist. It is only an illusion caused by a refusal to apply quantum mechanics to the macroworld, in particular to ourselves.
Many-Worlds quantum mechanics, like classical mechanics, is completely deterministic. So the observers have only the illusion of being free to choose the direction of spin measurement. However, we know from experience that there are universes of the multiverse in which the spins are measured in the orthogonal directions, and indeed universes in which the pair of directions are at angles θ at many values between 0 and π/2 radians. To obtain the Bell Theorem quantum prediction in this more general case, where there will be a certain fraction with spin in one direction, and the remaining fraction in the other, requires using Everett’s assumption that the square of the modulus of the wave function measures the density of universes in the multiverse.
There is a fundamental problem with Tipler’s explanation. The observer is split into one that observes the cat alive and the other observes it dead. So everything is split. Nelson did and didn’t win the battle of Copenhagen by turning his blind eye, so Nelson is also both a live and dead Schrödinger cat. The same for every idiosyncratic conscious decision we make, so history never gets made. Free will ceases to exist and quantum measurement does not collapse the wave function. So we have a multiverse of multiverses with no history at all. Hence no future either.
This simply isn’t in any way how the real universe manifests. The cat IS alive or dead. The universe is superficially classical because so many wave functions have collapsed or are about to collapse that the quantum universe is in a dynamical state of creating superpositions and collapsing nearly all of them, as the course of history gets made. This edge of chaos dynamic between collapse and wave superposition allows free will to exist within the cubic centimetre of quantum uncertainty. We are alive. Subjective conscious experience is alive and history is being unfolded as I type.
Nevertheless the implications of the argument are quite profound in that both a fully quantum multiverse and a classical universe are causally deterministic systems, showing that the capacity of subjectively conscious free-will to throw a spanner in the works comes from the interface we experience between these two deterministic extremes.
Transactional Interpretations: Another key interpretation which extends the Feynman description to real particle exchanges is the transactional interpretation TI (Cramer 1986, King 1989, Kastner 2012, Cramer & Mead 2020) where real quanta are also described as a hand-shaking between retarded (usual time direction) and advanced (retrocausal) waves from the absorber, called “offer” and “confirmation” waves. TI arose from the Wheeler-Feynman (WF) time-symmetric theory of classical electrodynamics (Wheeler and Feynman 1945, 1949, Feynman 1965), which proposed that radiation is a time-symmetric process, in which a charge emits a field in the form of half-retarded, half-advanced solutions to the wave equation, and the response of absorbers combines with that primary field to create a radiative process that transfers energy from an emitter to an absorber.
Fig 72: (1) In TI a transaction is established by crossed phase-advanced and retarded waves. (2) The superposition of these between the emitter and absorber results in a real quantum exchanged between emitter P and future absorber Q. (3) The origin of the positive-energy arrow of time, envisaged as a phase-reflecting boundary at the cosmic origin (Cramer 1983). (4) Pair-splitting entanglement can be explained by transactional handshaking at the common emitter. (5) The treatment of the quantum field in PTI is explained by assigning a different status to the internal virtual-particle transactions (Kastner 2012). (6) A real energy emission in which time has broken symmetry involves multiple transactions between the emitter and many potential absorbers, with collapse modelled as a symmetry-breaking in which the physical weight functions as the probability of that particular process as it ‘competes’ with other possible processes (Kastner 2014). (7) Space-time emerging from a transaction (Kastner 2021a). (8) Entanglement experiment with time-varying analysers (Aspect et al. 1982). A calcium atom emits two entangled photons with complementary polarisation, each of which travels to one of two detectors oscillating so rapidly there is no time to send information at the speed of light between the two detector pairs. (9) The blue and yellow photon transitions. (10) The quantum correlations (blue) exceed Bell’s limits for communication between the two at the speed of light. The experiment is referred to as EPR after Einstein, Podolsky and Rosen, who first posed the problem of spooky action at a distance.
The only non-paradoxical way entanglement and its collapse can be realised physically, especially in the case of space-like separated detectors as in fig 72(8), is this:
(A) The closer detector, say No. 1 destructively collapses the entanglement at (1) sending a non-entangled advanced confirmation wave back in time to the source.
(B) The arrival of the advanced wave at the source collapses the wave right at source, so that the retarded wave from the source is no longer entangled although it was prepared as entangled by the experimenter. This IS instantaneous but entirely local.
(C) The retarded offer wave from the Bell experiment is no longer actually entangled and is sent at light speed to detector 2 where if it is detected it immediately has complementary polarisation to 1.
(D) If detector 1 does not record a photon at the given angle no confirmation wave is sent back to the source, so no coincidence measurement can be made.
(E) The emitted retarded wave will remain entangled unless photon 1 is or has been absorbed by another atom but then no coincidence counts will be made either.
(F) The process is relativistically covariant. In an experimenter’s frame in which relative motion results in detector 2 sampling first, the roles of 1 and 2 are exchanged and the same explanation follows.
Every detection at (2) either collapses the entangled wave, or the already partially collapsed single particle wave function as in (B): If no detection has happened at 1, or anywhere else, the retarded source wave is still entangled, and detector 2 may sample it and collapse the entanglement. If a detection of photon 1 has happened elsewhere or at detector 1 the retarded source wave is no longer entangled, as in B above and then detector 2, if it samples photon 2, also collapses this non-entangled single particle wave function.
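The sequential logic of steps (A)–(C) can be sketched as a toy simulation. This is purely illustrative, assuming idealised detectors and ignoring analyser angles and the sinusoidal correlation: once detector 1 registers an outcome, the source wave is collapsed, so photon 2 arrives with definite complementary polarisation without any signal passing between the detectors.

```python
import random

def run_pair(rng):
    # (A) detector 1 samples the entangled wave; the toy outcome is a coin flip
    photon1 = rng.choice(["H", "V"])
    # (B) the advanced confirmation wave collapses the wave at the source,
    #     so the retarded wave travelling to detector 2 is no longer entangled
    photon2 = "V" if photon1 == "H" else "H"
    # (C) detector 2 samples a definite, non-entangled single-particle wave
    return photon1, photon2

rng = random.Random(0)
pairs = [run_pair(rng) for _ in range(1000)]
# Every pair is complementary, yet nothing travelled between the detectors
print(all(p1 != p2 for p1, p2 in pairs))   # True
```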
So there is no light-speed violating paradox, but there is a deeper paradox about advanced and retarded waves in space-time in the transactional principle. This, as far as I can see, gives the complete real-time account of how the universe actually deals with entanglement – not the fully collapsed statistical result the experimenter sees, who then figures the case is already closed.
The standard account of the Bell theorem experiment, as in Fig 72(8) cannot explain how the universe actually does it, only that the statistical correlation agrees with the sinusoidal angular dependence of quantum reality and violates the Bell inequality. The experimenter is in a privileged position to overview the total data and can conclude this with no understanding of how an entangled wave function they prepared can arrive at detector 2 unentangled when photon 1 has already been absorbed.
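The sinusoidal angular dependence and the Bell violation it produces can be checked numerically. A minimal sketch, assuming the standard polarisation correlation E(a, b) = cos 2(a − b) for the entangled photon pair and the analyser angles used in Aspect-type CHSH tests:

```python
import math

def E(a, b):
    # Quantum polarisation correlation for entangled photon pairs:
    # E(a, b) = cos 2(a - b), the sinusoidal dependence of fig 72(10)
    return math.cos(2 * (a - b))

# Analyser angles (radians) maximising the CHSH violation
a, a2 = 0.0, math.pi / 4
b, b2 = math.pi / 8, 3 * math.pi / 8

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)   # 2*sqrt(2) ~ 2.828, exceeding the local-realist bound of 2
```

Any local hidden-variable account limited to light-speed communication is bounded by |S| ≤ 2, so the computed value makes the violation explicit.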
Richard Feynman's (1965) Nobel Lecture "The Development of the Space-Time View of Quantum Electrodynamics" opened up the whole idea of advanced and retarded waves underlying the transactional picture twenty years before Cramer (1983). It enshrines the very principle before QED was completed as the most accurate physical theory yet devised.
The same applies to single particle wave functions, where collapse on absorption paradoxically has to result in the wave function suddenly going to zero even at space-like intervals from the emission and absorption loci – a process the advanced and retarded confirmation and offer waves account for. Quantum mechanics also allows events to happen with no definite causal order (Goswami et al. 2018).
As just noted, the process of wave function collapse has generally been considered to violate Lorentz relativistic invariance (Barrett 1999 p44-45):
The standard collapse theory, at least, really is incompatible with the theory of relativity in a perfectly straightforward way: the collapse dynamics is not Lorentz-covariant. When one finds an electron, for example, its wave function instantaneously goes to zero everywhere except where one found it. If this did not happen, then there would be a nonzero probability of finding the electron in two places at the same time in the measurement frame. The problem is that we cannot describe this process of the wave function going to zero almost everywhere simultaneously in a way that is compatible with relativity. In relativity there is a different standard of simultaneity for each inertial frame, but if one chooses a particular inertial frame in order to describe the collapse of the wave function, then one violates the requirement that all physical processes must be described in a frame-independent way.
Ruth Kastner (2021a,b) elucidates the relativistic transactional interpretation, which claims to resolve this through causal sets (Sorkin 2003) invoking a special-relativistic theory encompassing both real particle exchange and collapse:
In formal terms, a causal set C is a finite, partially ordered set whose elements are subject to a binary relation ≺ that can be understood as precedence; the element on the left precedes that on the right. It has the following properties:
(i) transitivity: (∀x, y, z ∈ C)(x ≺ y ≺ z ⇒ x ≺ z)
(ii) irreflexivity: (∀x ∈ C)(x ⊀ x)
(iii) local finiteness: (∀x, z ∈ C)(cardinality { y ∈ C | x ≺ y ≺ z } < ∞)
Properties (i) and (ii) assure that the set is acyclic, while (iii) assures that the set is discrete. These properties yield a directed structure that corresponds well to temporal becoming, which Sorkin describes as follows:
In Sorkin’s construct, one can then have a totally ordered subset of connected links (as defined above), constituting a chain. In the transactional process, we naturally get a parent/child relationship with every transaction, which defines a link. Each actualized transaction establishes three things: the emission event E, the absorption event A, and the invariant interval I(E,A) between them, which is defined by the transferred photon. Thus, the interval I(E,A) corresponds to a link. Since it is a photon that is transferred, every actualized transaction establishes a null interval, i.e., ds² = c²t² − r² = 0. The emission event E is the parent of the absorption event A (and A is the child of E).
A major advantage of the causal set approach as proposed by Sorkin and collaborators … is that it provides a fully covariant model of a growing spacetime. It is thus a counterexample to the usual claim (mentioned in the previous section) that a growing spacetime must violate Lorentz covariance. Specifically, Sorkin shows that if the events are added in a Poissonian manner, then no preferred frame emerges, and covariance is preserved (Sorkin 2003, p. 9). In RTI, events are naturally added in a Poissonian manner, because transactions are fundamentally governed by decay rates (Kastner and Cramer, 2018).
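Sorkin's three defining properties are easy to verify mechanically for a finite set of links. A minimal sketch, with a hypothetical toy set of actualised transactions represented as (emitter event, absorber event) pairs:

```python
import itertools

# Toy causal set: each link is an actualised transaction (E, A),
# with the emission event as parent and the absorption event as child.
links = {("E0", "A0"), ("E0", "A1"), ("A0", "A2")}

def transitive_closure(rel):
    """Close the precedence relation under transitivity (property i)."""
    closure = set(rel)
    changed = True
    while changed:
        changed = False
        # itertools.product consumes the set up front, so mutation is safe
        for (x, y), (y2, z) in itertools.product(closure, repeat=2):
            if y == y2 and (x, z) not in closure:
                closure.add((x, z))
                changed = True
    return closure

prec = transitive_closure(links)

# Property (ii) irreflexivity: no event precedes itself, so the set is acyclic
assert all(x != y for (x, y) in prec)

# Property (iii) local finiteness holds trivially for any finite toy set
print(sorted(prec))
```

The chain E0 ≺ A0 ≺ A2 yields the derived relation E0 ≺ A2 under transitivity, mirroring how chains of parent/child links build up the growing spacetime.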
Ruth Kastner comments in private communication in relation to her development of the transactional interpretation:
The main problem with the standard formulation of QM is that consciousness is brought in as a kind of 'band-aid' that does not really work to resolve the Schrodinger's Cat and Wigner's Friend paradoxes. The transactional picture, by way of its natural non-unitarity (collapse under well-quantified circumstances), resolves this problem and allows room for consciousness to play a role as the acausal/volitional influence that corresponds to efficacy (Kastner 2016). My version of TI, however, is ontologically different from Cramer’s and it also is fully relativistic (Kastner 2021a,b). For specifics on why many recent antirealist claims about the world as alleged implications of Wigner's Friend are not sustainable, see Kastner (2021c). In particular, standard decoherence does not yield measurement outcomes, so one really needs real non-unitarity in order to have correspondence with experience. I have also shown that the standard QM formulation, lacking real non-unitarity, is subject to fatal inconsistencies (Kastner 2019, 2021d). These inconsistencies appear to infect Everettian approaches as well.
Kastner (2011) explains the arrow of time as a foundational quantum symmetry-breaking:
Since the direction of positive energy transfer dictates the direction of change (the emitter loses energy and the absorber gains energy), and time is precisely the domain of change (or at least the construct we use to record our experience of change), it is the broken symmetry with respect to energy propagation that establishes the directionality or anisotropy of time. The reason for the ‘arrow of time’ is that the symmetry of physical law must be broken: ‘the actual breaks the symmetry of the potential.’ It is often viewed as a mystery that there are irreversible physical processes and that radiation diverges toward the future. The view presented herein is that, on the contrary, it would be more surprising if physical processes were reversible, because along with that reversibility we would have time-symmetric (isotropic) processes, which would fail to transfer energy, preclude change, and therefore render the whole notion of time meaningless.
Kastner (2012, 2014b) sets out the basis for extending the possibilist transactional interpretation or PTI, to the relativistic domain in relativistic transactional interpretation or RTI. This modified version proposes that offer and confirmation waves (OW and CW) exist in a sub-empirical, pre-spacetime realm (PST) of possibilities, and that it is actualised transactions which establish empirical spatiotemporal events. PTI proposes a growing universe picture, in which actualised transactions are the processes by which spacetime events are created from a substratum of quantum possibilities. The latter are taken as the entities described by quantum states (and their advanced confirmations); and, at a subtler relativistic level, the virtual quanta.
The basic idea is that offers and confirmations are spontaneously elevated forms of virtual quanta, where the probability of elevation is given by the decay rate for the process in question. In the direct action picture of PTI, an excited atom decays because one of the virtual photon exchanges ongoing between the excited electron and an external absorber (e.g. electron in a ground state atom) is spontaneously transformed into a photon offer wave that generates a confirming response. The probability for this occurrence is the product of the QED coupling constant α and the associated transition probability. In quantum field theory terms, the offer wave corresponds to a ‘free photon’ or excited state of the field, instantiating a Fock space state (Kastner 2014b).
In contrast with standard QFT, where the amplitudes over all interactions are added and then squared under the Born rule, in PTI the absorption of the offer wave generates a confirmation (the ‘response of the absorber’), an advanced field. This field can be consistently reinterpreted as a retarded field from the vantage point of an ‘observer’ composed of positive energy and experiencing events in a forward temporal direction. The product of the offer (represented by the amplitude) and the confirmation (represented by the amplitude’s complex conjugate) corresponds to the Born Rule.
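The closing statement, that the product of the offer amplitude and its complex-conjugate confirmation corresponds to the Born Rule, is just the familiar |ψ|² computed as ψψ*. A one-line illustration with a hypothetical amplitude:

```python
# Minimal sketch: the Born probability as offer x confirmation,
# i.e. the amplitude multiplied by its complex conjugate.
amp = complex(0.6, 0.8)            # hypothetical offer-wave amplitude
prob = (amp * amp.conjugate()).real
print(prob)                        # 1.0, since |0.6 + 0.8i|^2 = 0.36 + 0.64
```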
Kastner (2014a, 2021c,d) deconstructs decoherence as well as quantum Darwinism, refuting claims that the emergence of classicality proceeds in an observer-independent manner in a unitary-only dynamics, noting that quantum Darwinism holds that the emergence of classicality is not dependent on any inputs from observers, but that it is the classical experiences of those observers that the decoherence program seeks to explain from first principles:
“in the Everettian picture, everything is always coherently entangled, so pure states must be viewed as a fiction -- but that means that it is also fiction that the putative 'environmental systems' are all randomly phased. In helping themselves to this phase randomness, Everettian decoherentists have effectively assumed what they are trying to prove: macroscopic classicality only ‘emerges’ in this picture because a classical, non-quantum-correlated environment was illegitimately put in by hand from the beginning. Without that unjustified presupposition, there would be no vanishing of the off-diagonal terms”
She extends this to an uncanny observation concerning the Everett view:
"That is, MWI does not explain why Schrodinger’s Cat is to be viewed as ‘alive’ in one world and ‘dead’ in another, as opposed to ‘alive + dead’ in one world and ‘alive – dead’ in the other.”
Kastner (2016a) notes that the symmetry-breaking of the advanced waves provides an alternative explanation to von Neumann’s citing of the consciousness of the observer in quantum measurement:
Von Neumann noted that this Process 1 transformation is acausal, nonunitary, and irreversible, yet he was unable to explain it in physical terms. He himself spoke of this transition as dependent on an observing consciousness. However, one need not view the measurement process as observer-dependent. … The process of collapse precipitated in this way by incipient transactions [competing probability projection operator weightings of the] absorber response(s) can be understood as a form of spontaneous symmetry breaking.
Kastner & Cramer (2018) confirm this picture:
And since not all competing possibilities can be actualized, symmetry must be broken at the spacetime level of actualized events. The latter is the physical correlate of non-unitary quantum state reduction.
However, in Kastner (2016b), Ruth considers observer participation as integral, rejecting two specific critiques of libertarian, agent-causal free will: (i) that it must be anomic or “antiscientific”; and (ii) that it must be causally detached from the choosing agent. She asserts that notwithstanding the Born rule, quantum theory may constitute precisely the sort of theory required for a nomic grounding of libertarian free will.
Kastner cites Freeman Dyson’s comment rejecting epiphenomenalism:
I think our consciousness is not just a passive epiphenomenon carried along by the chemical events in our brains, but is an active agent forcing the molecular complexes to make choices between one quantum state and another. In other words, mind is already inherent in every electron, and the processes of human consciousness differ only in degree but not in kind from the processes of choice between quantum states which we call “chance” when they are made by electrons.
Kastner then proposes, not just a panpsychic quantum reality but a pan-volitional basis for it:
Considering the elementary constituents of matter as imbued with even the minutest propensity for volition would, at least in principle, allow the possibility of a natural emergence of increasingly efficacious agent volition as the organisms composed by them became more complex, culminating in a human being. And allowing for volitional causal agency to enter, in principle, at the quantum level would resolve a very puzzling aspect of the indeterminacy of the quantum laws – the seeming violation of Curie’s Principle in which an outcome occurs for no reason at all. This suggests that, rather than bearing against free will, the quantum laws could be the ideal nomic setting for agent-causal free will.
Kastner, Kauffman & Epperson (2018) formalise the relationship between potentialities and actualities into a modification of Descartes’ res cogitans (purely mental substance) and res extensa (material substance) to res potentiae and res extensa, comprising the potential and actual aspects of ontological reality. Unlike Cartesian dualism these are not separable or distinct but are manifest in all situations where the potential becomes actual, particularly in the process of quantum measurement in PTI, citing McMullin (1984) on the limits of imagination of the res potentiae:
… imaginability must not be made the test for ontology. The realist claim is that the scientist is discovering the structures of the world; it is not required in addition that these structures be imaginable in the categories of the macroworld.
They justify this by noting that human evolutionary survival has depended on dealing with the actual, so the potential may not be imaginable in our conscious frame of reference. However, one can note that the strong current of animism in human cultural history suggests a clear focus on the potential, and its capacity to become actual in hidden, unpredictable sources of accident or misfortune. In addition to just such unexpected real-world examples, they note the applicability of this to a multiplicity of quantum phenomena:
Thus, we propose that quantum mechanics evinces a reality that entails both actualities (res extensa) and potentia (res potentia), wherein the latter are as ontologically significant as the former, and not merely an epistemic abstraction as in classical mechanics. On this proposal, quantum mechanics IS about what exists in the world; but what exists comprises both possibles and actuals. Thus, while John Bell’s insistence on “beables” as opposed to just “observables” constituted a laudable return to realism about quantum theory in the face of growing instrumentalism, he too fell into the default actualism assumption; i.e., he assumed that to ‘be’ meant ‘to be actual,’ so that his ‘beables’ were assumed to be actual but unknown hidden variables.
What the EPR experiments reveal is that while there is, indeed, no measurable nonlocal, efficient causal influence between A and B, there is a measurable, nonlocal probability conditionalization between A and B that always takes the form of an asymmetrical internal relation. For example, given the outcome at A, the outcome at B is internally related to that outcome. This is manifest as a probability conditionalization of the potential outcomes at B by the actual outcome at A.
Nonlocal correlations such as those of the EPR entanglement experiment below can thus be understood as a natural, mutually constrained relationship between the kinds of spacetime actualities that can result from a given possibility – which itself is not a spacetime entity. They quote Anton Zeilinger (2016):
… it appears that on the level of measurements of properties of members of an entangled ensemble, quantum physics is oblivious to space and time.
Kastner (2021b) considers how the spacetime manifold emerges from a quantum substratum through the transactional process (fig 72(6)), in which spacetime events and their connections are established. The usual notion of a background spacetime is replaced by the quantum substratum, comprising quantum systems with non-vanishing rest mass, whose internal periodicities function as internal clocks defining proper times and, in turn, inertial frames that are not themselves aspects of the spacetime manifold.
Three years after John Cramer published the transactional interpretation, I wrote a highly speculative paper, “Dual-Time Supercausality” (King 1989, Vannini 2006), based on John’s description, which anticipated many of the same conclusions that emerge in Ruth Kastner’s far more comprehensive development. Summing up the main conclusions we have:
(1) Symmetric-Time: This mode of action of time involves a mutual space-time relationship between emitter and absorber. Symmetric-time determines which, out of the ensemble of possibilities predicted by the probability interpretation of quantum mechanics, is the actual one chosen. Such a description forms a type of hidden-variable theory explaining the selection of unique reduction events from the probability distribution. We will call this bi-directional causality transcausality.
(2) Directed-time: Real quantum interaction is dominated by retarded-time, positive-energy particles. The selection of temporal direction is a consequence of symmetry-breaking, resulting from energy polarization, rather than time being an independent parameter. The causal effects of multi-particle ensembles result from this dominance of retarded radiation, as an aspect of symmetry-breaking.
Dual-time is thus a theory of the interaction of two temporal modes, one time-symmetric which selects unique events from ensembles, and the other time-directed which governs the consistent retarded action of the ensembles. These are not contradictory. Each on its own forms an incomplete description. Temporal causality is the macroscopic approximation of this dual theory under the correspondence principle. The probability interpretation governs the incompleteness of directed-causality to specify unique evolution in terms of initial conditions.
Quantum-consciousness has two complementary attributes, sentience and intent:
(a) Sentience represents the capacity to utilise the information in the advanced absorber waves and is implicitly transcausal in its basis. Because the advanced components of symmetric-time cannot be causally defined in terms of directed-time, sentience is complementary to physically-defined constraints.
(b) Intent represents the capacity to determine a unique outcome from the collection of such absorber waves, and represents the selection of one of many potential histories. Intent addresses the two issues of free-will and the principle of choice in one answer – free-will necessarily involves the capacity to select one out of many contingent histories and the principle of choice manifests the essential nature of free-will at the physical level.
The transactional interpretation presents a unique view of cosmology, involving an implicit space-time anticipation in which a real exchange, e.g. a photon emitted by a light bulb and absorbed on a photographic plate or elsewhere, or a Bell type entanglement experiment with two detectors, is split into an offer wave from the emitter and retro-causal confirmation waves from the prospective absorbers that, after the transaction is completed, interfere to form the real photon confined between the emission and absorption vertices. We also experience these retro-causal effects in weak quantum measurement, and delayed choice experiments.
To get a full picture of this process, we need to consider the electromagnetic field as a whole, in which these same absorbers are also receiving offer waves from other emitters, so we get a network of virtual emitter-absorber pairs.
There is a fundamental symmetry between creation and annihilation, but there is a sting in the measurement tail. When we do an interference experiment, with real positive energy photons, we know each photon came from the small region within the light source, but the locations of the potential absorbers affected by the wave function are spread across the world at large. The photon could be absorbed anywhere on the photographic plate, or before it, if it hits dust in the apparatus, or after if it goes right through the plate and out of the apparatus altogether, just as radioactive particles escape the exponential potential barrier of the nucleus. The problem concerning wave function collapse is which absorber?
In all these cases once a potential absorber becomes real, all the other potential absorbers have zero probability of absorption, so the change occurs instantaneously across space-time to other prospective absorbers, relative to the successful one. This is the root problem of quantum measurement. Special relativistic quantum field theory is time symmetric, so solving wave function collapse is thus most closely realised in the transactional interpretation, where the real wave function is neither the emitter's spreading linear retarded wave, nor any of the prospective absorbers’ linear advanced waves, but the results of a phase transition, in which all these hypothetical offer and confirmation waves resolve into one or more real wave functions linking creation and annihilation vertices. It is the nature of this phase transition and its non-linearity which holds the keys to life the universe and everything and potentially the nature of time itself.
Fig 73: A transaction modelled by a phase transition from a virtual plasma to a real interactive solid spanning space-time, in which the wave functions have now become like the harmonic phonons of solid state physics.
I remain intrigued by the transactional principle because I am convinced that subjective consciousness is a successful form of quantum prediction in space-time that has enabled single-celled eucaryotes to conquer the biosphere before there were brains, which have evolved based on intimately-coupled societies of such cells (neurons and neuroglia) now forming the neural networks neuroscience tries to understand in classical causal terms.
The eucaryote endo-symbiosis in this view marks a unique discrete topological transformation of the membrane to unfold attentive sentient consciousness invoking the second stage of cosmological becoming that ends up being us wondering what the hell is going on here? This is the foundation of emergence as quantum cosmology and it explains why we have the confounding existential dilemma we do have and why it all comes back to biospheric symbiosis being the centre of the cyclone of survival for us as a climax species.
The full picture of a transaction process is a population of real, or potential emitters in excited states and potential ground state absorbers, with their offer and confirmation wave functions extending throughout space time, as in the Feynman representation. As the transaction proceeds, this network undergoes a phase transition from a “virtual plasma” state to a “real solid”, in which the excited emitters are all paired with actual absorbers in the emitters’ future at later points in space-time. This phase transition occurs across space-time– i.e. transcausally – covering both space-like and time-like intervals. It has many properties of a phase transition from plasma to solid, with a difference – the strongest interactions don’t win, except with a probability determined by the relative power of the emitter’s wave amplitudes at the prospective absorption event. This guarantees the transaction conforms to the emitter’s probability distribution and the absorber's one as well. If a prospective absorber has already interacted with another emitter, it will not appear in the transaction network at this space-time point, so ceases to be part of the collective transaction. Once this is the case, all other prospective absorbers of a given emitter scattered throughout space-time, both in the absorber’s past and future, immediately have zero probability of absorption from any of the emitters and no causal conflict, or time loop arises.
Here is the problem. The transition is laterally across the whole of space-time, not along the arrow of time in either direction, so cannot exist within space-time and really needs a dual time parameter. This is why my 1989 paper was entitled “dual-time super-causality”.
Now this doesn’t mean a transaction is just a random process. Rather, it is a kind of super-selection theory, in which the probability of absorption at an absorber conforms to the wave probability but the decision making process is spread between all the prospective absorbers distributed across space-time, not just an emitter-based random wave power normalised probability. The process is implicitly retro-causal in the same way weak quantum measurement and Wheeler’s delayed choice experiments are.
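This super-selection picture can be sketched as a toy Monte Carlo run. The sketch assumes three hypothetical absorbers whose squared wave amplitudes are 0.5, 0.3 and 0.2; any decision process whose long-run frequencies conform to these weights reproduces the Born statistics, however the individual choices are actually made:

```python
import random
from collections import Counter

# Hypothetical squared wave amplitudes |psi|^2 at three prospective absorbers
weights = {"A1": 0.5, "A2": 0.3, "A3": 0.2}

def transact(weights, rng):
    """Pick the winning absorber with probability given by its physical weight."""
    r = rng.random()
    cum = 0.0
    for absorber, w in weights.items():
        cum += w
        if r < cum:
            return absorber
    return absorber    # guard against floating-point rounding at the top end

rng = random.Random(42)
n = 100_000
counts = Counter(transact(weights, rng) for _ in range(n))
for absorber in weights:
    print(absorber, counts[absorber] / n)   # frequencies near 0.5, 0.3, 0.2
```

The point of the sketch is only that the observed frequencies are asymptotic to the weights; nothing in the statistics distinguishes a genuinely random draw from a hidden decision process spread across the prospective absorbers.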
The fact that in the cat paradox experiment we see only a live or dead cat and not a superposition doesn’t mean, however, that conscious observers witness only a classical world view. There are plenty of real phenomena in which we do observe quantum superpositions, including quantum erasure and quantum recoherence, where entangled particles can be distinguished, collapsing the entanglement, and then re-entangled. A laser consists of atoms excited above the ground state, which are triggered to emit photons coherently into a superposition of indistinguishable in-phase states, stimulated by a standing wave caught between the laser’s reflecting mirrors, so the bright laser light we see is a massive superposition of entangled photons.
In all forms of quantum entanglement experiment, when the state of one of the pair is detected, the informational outcome is “transmitted” instantaneously to the other detector so that the other particle’s state is definitively complementary, even though the detectors can be separated by space-like as well as time-like intervals, although this transmission cannot be used to relay classical information. This again is explained by the transactional interpretation, because the confirmation wave of the first detector of the pair is transmitted retro-causally back to the source event where the splitting occurred and then causally out to the second detector where it now has obligately complementary spin or polarisation when detection occurs.
What the transactional interpretation does provide is a real collapse process in which the universe is neither stranded in an Everett probability multiverse, nor in a fully collapsed classical state, but can be anywhere in between, depending on which agents are doing the measuring in a given theory. Nor is collapse necessarily random and thus meaningless, but is a space-time spanning non-linear phase transition, involving bidirectional hand-shaking between past and future. The absorbers are all in an emitter’s future so there is a musical chairs dance happening in the future. And those candidates may also be absorbers of other emitters and so on, so one can’t determine the ultimate boundary conditions of this problem. Somehow the “collapse”, which we admit violates retarded causality, results in one future choice. This means that there is no prohibition on this being resolved by the future affecting the outcome because the actual choice has no relation to classical causality.
The only requirement is that individual observations are asymptotic to the Born probability interpretation modulated by the wave function power φ·φ*, but this could arise from a variety of complex trans-deterministic quasi-random processes, where multiple entanglements generate effective statistical noise, while having a basis in an explicit hidden variable theory. The reason for the Born asymptote could thus be simply that the non-linear phase transition of the transaction, like the cosmic wave function of the universe, potentially involves everything there is – the ultimate pseudo-random optimisation process concealing a predictive hidden variable theory.
It is also one in which subjective conscious volition and meaning can become manifest in cosmic evolution, in which the universe is in a state of dynamic ramification and collapse of quantum superpositions. The key point here is that subjective conscious volition needs to have an anticipatory property in its own right, independent of brain mechanisms like attention processes, or it will be neutral to natural selection, even if we do have free will, and would not have been selected for, all the way from founding eucaryotes to Homo sapiens. The transactional interpretation, by involving future absorbers in the collapse process, provides just such an anticipatory feature.
Roger Penrose notes:
Current quantum mechanics, in the way that it is used, is not a deterministic scheme, and probabilistic behaviour is taken to be an essential feature of its workings. Some would contend that such indeterminism is here to stay, whereas others argue that there must be underlying 'hidden variables' which may someday restore a fully deterministic underlying ontology. ... Personally, I do not insist on taking a stand on this issue, but I do not think it likely that pure randomness can be the answer. I feel that there must be something more subtle underlying it all.
It is one thing to have free will and it’s another to use free will for survival on the basis of (conscious) prediction, or anticipation. Our conscious brains are striving to be predictive to the extent that we are subject to flash-lag perceptual illusions where perceptual processes attempt, sometimes incorrectly, to predict the path of rapidly moving objects (Eagleman & Sejnowski 2000), so the question is pivotal. Anticipating future threats and opportunities is key to how we evolved as conscious organisms, and this is pivotal over short immediate time scales, like the snake’s or tiger’s strike which we survive. Anticipating reality in the present is precisely what subjective consciousness is here to do.
The hardest problem of consciousness is thus that, to be conserved by natural selection, subjective consciousness (a) has to be volitional i.e. affect the world physically to result in natural selection and (b) it has to be predictive as well. Free-will without predictivity is neutral to evolution, just like random behaviour, and it will not be selected for. If we are dealing with classical reality, we could claim this is merely a computational requirement, but why then do we have subjective experience at all? Why not just recursive predictive attention processes with no subjectivity?
Here is where the correspondence between sensitive dynamic instability at tipping points and quantum uncertainty comes into the picture. We know biology and particularly brain function is a dynamically unstable process, with sensitive instabilities that are fractal down to the quantum level of ion channels, enzyme molecules whose active sites are enhanced by quantum tunnelling and the quantum parallelism of molecular folding and interactive dynamics. We also know that the brain dynamics operating close to the edge of chaos is convergent to dynamic crisis during critical decision-making uncertainties that do not have an obvious computational, cognitive, or reasoned disposition. We also know at these points that the very processes of sensitivity on existing conditions and other processes, such as stochastic resonance, can allow effects at the micro level approaching quanta to affect the outcome of global brain states.
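As a purely illustrative sketch of this sensitive dependence (using the logistic map as a stand-in for edge-of-chaos dynamics, not a model of any actual neural process), a perturbation at the 10⁻¹² scale – far below anything resolvable at the global level – is amplified to macroscopic size within a few dozen iterations:

```python
def logistic_trajectory(x0, r=4.0, steps=40):
    """Iterate the logistic map x -> r*x*(1-x), a standard model of chaos."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial states differing by a "quantum-scale" perturbation of 1e-12
a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-12)

divergence = [abs(x - y) for x, y in zip(a, b)]
# The microscopic difference doubles per step on average (Lyapunov exponent
# ln 2 at r = 4), reaching order 1 within ~40 iterations
print(divergence[0], divergence[-1])
```

The same amplification logic is what allows fluctuations approaching the quantum level to tip a globally unstable brain state.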
And those with any rational insight can see that, for both theoretical and experimental reasons, classical causal closure of the universe in the context of brain dynamics is an unachievable quest. Notwithstanding Libet’s attempt, there is no technological way to experimentally verify that the brain is causally closed, and the claim flies in the face of the fractal molecular nature of biological processes at the quantum level.
Nevertheless we can understand that subjective conscious volition cannot enter into causal conflict with brain processes which have already established an effective computational outcome, as we do when we reach a prevailing reasoned conclusion, so free will is effectively restricted to situations where the environmental circumstances are uncertain, or not effectively computable, or perceived consciously to be anything but certain.
This in turn means that the key role of free will is not applying it to rationally or emotionally foregone conclusions but to environmental and strategic uncertainties, especially involving other conscious agents whose outcomes become part of quantum uncertainty itself.
The natural conclusion is that conscious free will has been conserved by evolution because it provides an evolutionary advantage at anticipating root uncertainties in the quantum universe and only these, including environmental and contextual uncertainties which are themselves products of quantum uncertainty amplified by unstable processes in the molecular universe such as quantum kinetic billiards. This seems almost repugnantly counter-intuitive, because we tend to associate quantum uncertainty and the vagaries of fate with randomness, but this is no more scientifically established than causal closure of the universe in the context of brain function. All the major events of history that are not foregone conclusions result from conscious free will applied to uncertainty, such as Nelson turning his blind eye to the telescope in the eventually successful Battle of Copenhagen.
So when we turn to the role of subjective conscious volition in quantum uncertainty, the question comes down not just to opening the box of Schrödinger’s cat, but to anticipating uncertain events more often than random chance would predict in real-life situations.
That is where the transactional approach comes into its own because, while the future at the time of casting the emission die is an indeterminate set of potential absorbers, the retro-causal information contained in the transaction implicitly reveals which future absorbers are actually able to absorb the real emitted quantum – hence information about the real state of the future universe, not just its probabilities at emission. The transaction therefore carries additional implicit “encoded” information about the actual future state of the universe and its possibilities, which can be critical for survival in natural selection.
Just as the “transmission” of a detection to the other detector in an entanglement experiment cannot be used to transfer classical information faster than the speed of light, the same applies to quantum transactions; but this does not mean they are random or have no anticipatory value, only that they cannot be used for causal deduction.
Because the "holistic" nature of conscious awareness is an extension of the global unstable excitatory dynamics of individual eucaryote cells to brain dynamics, a key aspect of subjective consciousness may be that it becomes sensitive to the wave-particle properties of quantum transactions with the natural environment in the process of cellular quantum sentience, involving sensitivity to quantum modes, including photons, phonons and molecular orbital effects constituting cellular vision, audition and olfaction. Expanded into brain processes, this cellular quantum dynamics then becomes integral to the binding of consciousness into a coherent whole.
We can instead view neurodynamics as a fully quantum process, in the most exotic quantum material in the universe, in which the wave aspects consist of parallel excitation modes representing the competing possibilities of response to environmental uncertainties. If there is an open and shut case on logical or tactical grounds, that mode will win out, much in the manner of Edelman's (1987) neural Darwinism or Dennett's (1991) multiple drafts. In terms of quantum evolution, the non-conscious processes form overlapping wave functions, proceeding according to deterministic Schrödinger solutions (von Neumann type 2 processes), but in situations where subjective consciousness becomes critical to making an intuitive decision, the brain dynamic approaches an unstable tipping point, in which system uncertainty becomes pivotal (represented in instability of global states, which are in turn sensitive to fractal scales of instability down to the molecular level). Subjective consciousness then intervenes, causing an intuitive decision through a (type 1 von Neumann) process of wave function collapse of the superimposed modes.
From the inside, this feels like and IS a choice of “free will”, aka subjective conscious volition over the physical universe. From the outside, this looks like collapse of an uncertain brain process to one of its eigenfunction states, which then becomes apparent. There is a very deep mystery in this process, because from the outside the physical process looks and remains uncertain and indeterminate, but from inside, in complete contradiction, it looks and feels like the exercise of intentional will determining future physical outcomes. So in a fundamental way it is like a Schrödinger cat experiment in which the cat survives more often than not, i.e. we survive. That is a really confounding issue at the very nub of what conscious existence is about, and why SEC has the cosmological axiom of subjectivity to resolve it, because it is a fundamental cosmological paradox otherwise. So we end up with the ultimate paradox of consciousness: how can we not only predict future outcomes that are quantum uncertain but capitalise on the ones that promote our survival – i.e. throw a live cat more often than chance would dictate?
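The two von Neumann process types can be sketched numerically for an illustrative two-level system (the function names and the Rabi-style coupling below are assumptions for illustration, not part of SEC): type 2 unitary evolution rotates the amplitudes deterministically, while a type 1 measurement samples one of the superposed modes with the Born probabilities:

```python
import math
import random

def evolve(c0, c1, omega_t):
    """Type 2 (unitary) Schrödinger evolution of a two-level amplitude pair
    under a resonant coupling: a deterministic, reversible rotation."""
    cos, sin = math.cos(omega_t / 2), math.sin(omega_t / 2)
    return (cos * c0 - 1j * sin * c1, -1j * sin * c0 + cos * c1)

def collapse(c0, c1, rng):
    """Type 1 (collapse): sample an outcome with Born probabilities |c|^2."""
    return 0 if rng.random() < abs(c0) ** 2 else 1

rng = random.Random(1)
c0, c1 = evolve(1 + 0j, 0 + 0j, math.pi / 2)    # evolve into equal superposition
p0 = abs(c0) ** 2                                # Born probability = 0.5
outcomes = [collapse(c0, c1, rng) for _ in range(10000)]
print(p0, sum(outcomes) / len(outcomes))         # frequency converges to ~0.5
```

Each single collapse is indeterminate from the outside; only the repeated frequencies converge to the wave power, which is exactly the asymptotic Born behaviour discussed above.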
This is the same dilemma that SEC addresses in primal subjectivity, and it also appears in Cathy Reason's theorem: from the physical point of view, causal closure of the brain is an undecidable proposition, because we can't physically prove conscious will has physical effect, but neither can we prove causal closure of the (classical) universe. On the other hand, as Reason's theorem intimates, conscious self-certainty implies we know we changed the universe – certainty of will as well as certainty of self. So the subjective perspective is certain and the objective perspective is undecidable. In exactly the same way, the cat-paradox outcome is uncertain and can't be hijacked physically, but the autonomous intentional will used to tip the uncertain brain state has confidence of overall efficacy. This is the key to consciousness, free will and survival in the jungle, when cognition stops dead because of all the other conscious agents rustling in the grass and threatening to strike – uncomputable because they too are conscious! It's also the key to psi, but in a more marginal way, because psi tries to pass this ability back into the physical, where it drifts towards the probability interpretation. That's why I accept it, but don't abuse the siddhis by declaring them!
Consciousness is retained by evolution because it is a product of a Red Queen neurodynamic race between predators and prey in a similar way to the way sexuality has become a self-perpetuating genetic race between parasites and hosts by creating individual variation, thus avoiding boom and bust epidemics.
Cramer (2022) notes a possible verifiable source of advanced waves:
In the 1940s, young Richard Feynman and his PhD supervisor John Wheeler decided to take the advanced solution seriously and to use it to formulate a new electromagnetism, now called Wheeler-Feynman absorber theory (WF). WF assumes that an oscillating electric charge produces advanced and retarded waves with equal strengths. However, when the retarded wave is subsequently absorbed (in the future), a cancellation occurs that erases all traces of the advanced waves and their time-backward “advanced effects.” WF gives results and predictions identical to those of conventional electromagnetic theory. However, if future retarded-wave absorption is somehow incomplete, WF suggests that this absorption deficiency might produce experimentally observable advanced effects.
When Bajlo (2017) made measurements on cold, clear, dry days, observing as the Earth rotated and the antenna axis swept across the galactic centre, where wave-absorption variations might occur, in a number of these measurements he observed strong advanced signals (6.94 to 26.5 standard deviations above noise) that arrived at the downstream antenna a time 2D/c before the main transmitted pulse signal. Variations in the advanced-signal amplitude as the antenna axis swept across the galactic centre were also observed: the amplitude was reduced by up to 50% of the off-centre maximum when pointed directly at the galactic centre (where more absorption is expected). These results constitute a credible observation of advanced waves.
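The 2D/c timing signature Bajlo looked for is easy to make concrete (the 30 m antenna separation below is an illustrative figure, not Bajlo's actual geometry):

```python
C = 299_792_458.0  # speed of light, m/s

def advanced_lead_time(D):
    """Lead time 2D/c by which an advanced signal would precede the retarded
    pulse at a downstream antenna a distance D away."""
    return 2.0 * D / C

# For antennas 30 m apart, the advanced signal leads by about 0.2 microseconds
print(advanced_lead_time(30.0))
```

A lead of this size is well within the timing resolution of ordinary pulse electronics, which is why the effect is experimentally testable at all.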
Fig 74: Wheeler (1983) delayed choice experiment shows that, after light from a distant quasar has been gravitationally lensed around an intervening galaxy, different forms of measurement can determine it to have passed one or the other way around the galaxy, or in a superposition of both, depending on whether detection of one or other particle, or an interference measurement, is made when it reaches Earth.
Superdeterminism: There is another interpretation of quantum reality called super-determinism (Hossenfelder & Palmer 2020), which has an intriguing relationship with retro-causality and can still admit free will, despite the seeming contradiction in the title. Bell's theorem assumes statistical independence: the measurements performed at each detector can be chosen independently of each other and of the hidden variables that determine the measurement outcome, ρ(λ|a,b) = ρ(λ).
In a super-deterministic theory this relation is not fulfilled, ρ(λ|a,b) ≠ ρ(λ), because the hidden variables are correlated with the measurement settings. Since the choice of measurements and the hidden variable are predetermined, the results at one detector can depend on which measurement is done at the other without any need for information to travel faster than the speed of light. The assumption of statistical independence is sometimes referred to as the free choice or free will assumption, since its negation implies that human experimentalists are not free to choose which measurement to perform. But this is incorrect: what the outcome depends on are the actual measurements made. For every possible pair of measurements a, b there is a predefined trajectory determined both by the particle emission and the measurement at the time absorption takes place. Thus in general the experimenter still has the free will to choose a, b, or even to change the detector setup, as in the Wheeler delayed choice experiment in fig 74, and science proceeds as usual, but the outcome depends on the actual measurements made. In principle, super-determinism is untestable, as the correlations can be postulated to have existed since the Big Bang, making the loophole impossible to eliminate. However it has an intimate relationship with the transactional interpretation and its implicit retro-causality, because the transaction includes the absorbing conditions, so the two are actually compatible.
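The statistical-independence condition can be made concrete with a toy numerical sketch (this is not Hossenfelder & Palmer's model): a local hidden-variable model in which the hidden angle λ is drawn independently of the settings, ρ(λ|a,b) = ρ(λ), stays within the CHSH bound of 2, while the quantum singlet correlations reach 2√2:

```python
import math
import random

ANGLES = (0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)  # a, a', b, b'

def chsh(E):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    a, a2, b, b2 = ANGLES
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

def E_local(rng, trials=40000):
    """Correlation in a local hidden-variable model obeying statistical
    independence: the hidden angle lam is drawn with rho(lam|a,b) = rho(lam)."""
    def E(a, b):
        s = 0
        for _ in range(trials):
            lam = rng.uniform(0.0, 2.0 * math.pi)
            A = 1 if math.cos(a - lam) >= 0 else -1
            B = 1 if math.cos(b - lam) >= 0 else -1
            s += A * B
        return s / trials
    return E

# Quantum singlet prediction for spin measurements along angles a, b:
E_quantum = lambda a, b: -math.cos(a - b)

print(chsh(E_local(random.Random(0))))  # ~2: the CHSH bound for local models
print(chsh(E_quantum))                  # 2*sqrt(2) ~ 2.828: quantum violation
```

A superdeterministic model evades the bound not by changing this arithmetic but by letting λ's distribution depend on the settings a, b actually chosen.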
However their “toy” superdeterministic hidden variable theory (Donadi & Hossenfelder 2022) uses “the master equation for one of the most common examples of decoherence – amplitude damping in a two-level system”. But decoherence is a theory in which an additional term is added to model the increasing probability of a quantum getting hit by another quantum, literally using forced damping to suppress the entangled “off-diagonal” components of the wave function matrix. This equates the stochastic nature of uncertainty with quantum billiards, yet the root uncertainty relations are not simply sourced in interactive billiards, so one claimed source of randomness is being substituted for the more fundamental sort that manifests in the quantum vacuum.
Schreiber (1995) sums up the case for consciousness collapsing the wave function as follows:
“The rules of quantum mechanics are correct but there is only one system which may be treated with quantum mechanics, namely the entire material world. There exist external observers which cannot be treated within quantum mechanics, namely human (and perhaps animal) minds, which perform measurements on the brain causing wave function collapse.”
Henry Stapp’s (2001) comment is very pertinent to the cosmology I am propounding, because it implies the place where collapse occurs lies in the brain making quantum measurements of its own internal states:
“From the point of view of the mathematics of quantum theory it makes no sense to treat a measuring device as intrinsically different from the collection of atomic constituents that make it up. A device is just another part of the physical universe... Moreover, the conscious thoughts of a human observer ought to be causally connected most directly and immediately to what is happening in his brain, not to what is happening out at some measuring device... Our bodies and brains thus become ... parts of the quantum mechanically described physical universe. Treating the entire physical universe in this unified way provides a conceptually simple and logically coherent theoretical foundation... “
Quantum entanglement is another area where consciousness may have a critical role. Einstein, Podolsky and Rosen (1935) proposed a locally causal limitation on any hidden variable theories describing the situation when two particles are entangled coherently in a single wave function. For example an excited calcium atom, because of the two electrons in its outer shell, can emit two (yellow and blue) photons of complementary spin in a single transition from zero to zero spin outer shells. Bell’s (1966) theorem demonstrated a discrepancy between quantum mechanics and locally-causal theories, in which information between hidden sub-quantum variables cannot be transferred faster than light. Multiple experiments using Bell’s theorem have found that the polarisations, or other quantum states of the particles, such as spin, are correlated in ways violating local causality, unlimited by the velocity of light (Aspect et al. 1982). This “spooky action at a distance”, which Einstein disliked, shows that the state of either particle remains indeterminate until we measure one of them, when the other’s state is instantaneously determined to be complementary. This cannot be used to send logical classical information faster than light, or backwards in time, but it indicates that the quantum universe is a highly entangled system in which potentially all particles in existence are involved.
An experiment to test the influence of conscious perception on quantum entanglement (Radin, Bancel & Delorme 2021) explored psychophysical (mind-matter) interactions with quantum entangled photons. Entanglement correlation strength measured in real time was presented via a graph or dynamic images displayed on a computer monitor or web browser. Participants were tasked with mentally influencing that metric, with particularly strong results observed in three studies conducted (p < 0.0002). Radin, Michel & Delorme (2016) also reported a 5.72 sigma (p = 1.05 × 10−8) deviation from a null effect in which participants focused their attention toward or away from a feedback signal linked in real time to the double-slit component of an interference pattern, suggesting consciousness affecting wave function collapse. For a review, see Milojevic & Elliot (2023). Radin (2023) has also reported deviations of 7.3 sigma beyond chance (p = 1.4 × 10−13) from a network of electronic random number generators located around the world, continuously recording samples to explore a hypothesis that predicts the emergence of anomalous structure in randomness correlated with events that attract widespread human attention – leaving little doubt that, on average, anomalous deviations in the random data emerged during events that attracted widespread attention. Mossbridge et al. (2014), in a meta-analysis, have also cited an organic unconscious anticipatory response to potential existential crises they term predictive anticipatory activity, which is similar to conscious quantum anticipation, citing anticipative entanglement swapping experiments such as Ma et al. (2012).
Fig 75: (Above) Delayed choice entanglement swapping, in which Victor is able to decide whether Alice's and Bob's photons are entangled or not after they have already been measured (Ma et al. 2012). (Below) A photon is entangled with a photon that has already died (been sampled) even though they never coexisted at any point in time (Megidish et al. 2012).
Summing up the position of physicists in a survey of participants in a foundations of quantum mechanics gathering, Schlosshauer et al. (2013) found that, while only 6% of physicists present believed consciousness plays a distinguished physical role, a majority believed it has a fundamental, although not distinguished role in the application of the formalism. They noted in particular that “It is remarkable that more than 60% of respondents appear to believe that the observer is not a complex quantum system.” Indeed on all counts queried there were wide differences of opinion, including which version of quantum mechanics they supported. Since all of the approaches are currently consistent with the predictions of quantum mechanics, these ambiguous figures are not entirely surprising.
The tendency towards an implicitly classical view of causality is similar to that among neuroscientists, with an added belief in the irreducible nature of randomness, as opposed to a need for hidden variables supporting quantum entanglement – rejecting Einstein’s disclaimer “God does not play dice with the universe.” Belief in irreducible randomness means that the principal evidence for subjectivity in quanta – the idiosyncratic, unpredictable nature of individual particle trajectories – is washed out in the bath water of irreducible randomness, converging to the wave amplitude on repetition, consistent with the correspondence principle: that the behaviour of systems described by quantum mechanics reproduces classical physics in the limit of large quantum numbers.
Non-IID interactions may preserve quantum reality In Bohr's (1920) correspondence principle, systems described by quantum mechanics are believed to reproduce classical physics in the limit of large quantum numbers – if measurements performed on macroscopic systems have limited resolution and cannot resolve individual microscopic particles, then the results behave classically – the coarse-graining principle (Kofler & Brukner 2007). Subsequently Navascués & Wunderlich (2010) proved that in situations covered by IID measurements (independent and identically distributed), in which each run of an experiment must be repeated under exactly the same conditions and independently of other runs, we arrive at macroscopic locality. Similarly, temporal quantum correlations reduce to classical correlations and quantum contextuality reduces to macroscopic non-contextuality (Henson & Sainz 2015).
However Gallego & Dakić (2021) have shown that, surprisingly, quantum correlations survive in the macroscopic limit if correlations are not IID distributed at the level of microscopic constituents and that the entire mathematical structure of quantum theory, including the superposition principle is preserved in the limit. This macroscopic quantum behaviour allows them to show that Bell nonlocality is visible in the macroscopic limit.
“The IID assumption is not natural when dealing with a large number of microscopic systems. Small quantum particles interact strongly and quantum correlations and entanglement are distributed everywhere. Given such a scenario, we revised existing calculations and were able to find complete quantum behavior at the macroscopic scale. This is completely against the correspondence principle, and the transition to classicality does not take place” (Borivoje Dakić).
“It is amazing to have quantum rules at the macroscopic scale. We just have to measure fluctuations, deviations from expected values, and we will see quantum phenomena in macroscopic systems. I believe this opens the door to new experiments and applications” (Miguel Gallego).
Their approach is described as follows:
In this respect, one important consequence of the correspondence principle is the concept of macroscopic locality (ML): Coarse-grained quantum correlations become local (in the sense of Bell) in the macroscopic limit. ML has been challenged in different circumstances, both theoretically and experimentally. However, as far as we know, nonlocality fades away under coarse graining when the number of particles N in the system goes to infinity, in a bipartite Bell-type experiment where the parties measure intensities with a resolution of the order of N^1/2 – equivalently, O(N^1/2) coarse graining. Under the premise that particles are entangled only in independent and identically distributed pairs, Navascués & Wunderlich (2010) prove ML for quantum theory.
Fig 76: Macroscopic Bell-type experiment.
We generalize the concept of ML to any level of coarse graining α ∈ [0, 1], meaning that the intensities are measured with a resolution of the order of N^α. We drop the IID assumption, and we investigate the existence of a boundary between quantum (nonlocal) and classical (local) physics, identified by the minimum level of coarse graining α required to restore locality. To do this, we introduce the concept of macroscopic quantum behavior (MQB), demanding that the Hilbert space structure, such as the superposition principle, is preserved in the thermodynamic limit.
Conclusion: We have introduced a generalized concept of macroscopic locality at any level of coarse graining α ∈ [0, 1]. We have investigated the existence of a critical value that marks the quantum-to-classical transition. We have introduced the concept of MQB at level α of coarse graining, which implies that the Hilbert space structure of quantum mechanics is preserved in the thermodynamic limit. This facilitates the study of macroscopic quantum correlations. By means of a particular MQB at α = 1/2, we show that α_c ≥ 1/2, as opposed to the IID case, for which α_IID ≤ 1/2. An upper bound on α_c is, however, lacking in the general case. The possibility that no such transition exists remains open, and perhaps there exist systems for which ML is violated at α = 1.
This means, for example, that both (a) neural system processing, where the quantum unstable context is continually evolving as a result of edge-of-chaos processing, so that repeated IID measurements are not made, and (b) biological evolution, where a sequence of unique mutations becomes sequentially fixed by natural and sexual selection, which is also consciously mediated in eucaryote organisms, inherit implicit quantum non-locality in their evolution.
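The N^α coarse-graining threshold can be illustrated with a minimal simulation (an illustrative sketch, not the Gallego & Dakić construction): the fluctuations of a summed intensity of N independent ±1 outcomes grow as N^1/2, so a detector with resolution of order N^α resolves them only when α ≤ 1/2:

```python
import math
import random

def intensity_fluctuation(N, samples, rng):
    """Standard deviation of the total intensity I = sum of N independent
    +/-1 outcomes, estimated over repeated runs."""
    vals = []
    for _ in range(samples):
        I = sum(1 if rng.random() < 0.5 else -1 for _ in range(N))
        vals.append(I)
    mean = sum(vals) / samples
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / samples)

rng = random.Random(2)
for N in (100, 400, 1600):
    s = intensity_fluctuation(N, 400, rng)
    # The ratio s / sqrt(N) stays near 1: fluctuations scale as N**0.5,
    # the threshold a coarse-grained detector must beat to see quantum effects
    print(N, s, s / math.sqrt(N))
```

In the IID case these N^1/2 fluctuations are the only structure available, and coarse graining at α > 1/2 washes them out; non-IID correlations between constituents are what allow quantum structure to survive the limit.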
John Eccles (1986) proposed a quantum theory involving psychon quasi-particles mediating uncertainty of synaptic transmission to complementary dendrons – cylindrical bundles of neurons arranged vertically in the six outer layers or laminae of the cortex. Eccles proposed that each of the 40 million dendrons is linked with a mental unit, or "psychon", representing a unitary conscious experience. In willed actions and thought, psychons act on dendrons and, for a moment, increase the probability of the firing of selected neurons through a quantum tunnelling effect in synaptic exocytosis, while in perception the reverse process takes place. This model has been elaborated by a number of researchers (Eccles 1990, 1994, Beck & Eccles 1992, Georgiev 2002, Hari 2008). The difficulty with the theory is that the psychons are then physical quasi-particles with integrative mental properties, so it is a quasi-physical description that does not manifest subjectivity except through its integrative physical properties.
The Quantum Measurement Problem May Contradict Objective Reality
In quantum theory, before collapse, the system is said to be in a superposition of two states – say the heads and tails of a quantum coin – and this quantum state is described by the wave function, which evolves in time and space. This evolution is both deterministic and reversible: given an initial wave function, one can predict what it will be at some future time, and one can in principle run the evolution backward to recover the prior state. Measuring the wave function, however, causes it to collapse, mathematically speaking, such that the system in our example shows up as either heads or tails. The collapse is irreversible and one-time-only, and no one knows what defines the process or boundaries of measurement.
One model that preserves the absoluteness of the observed event — either heads or tails for all observers—is the GRW theory, where quantum systems exist in a superposition of states until the superposition spontaneously and randomly collapses, independent of an observer. Whatever the outcome—heads or tails in our example—it shall hold for all observers. But GRW, and the broader class of “spontaneous collapse” theories, run foul of a long-cherished physical principle: the preservation of information. By contrast, the “many worlds” interpretation of quantum mechanics allows for non-absoluteness of observed events, because the wave function branches into multiple contemporaneous realities, in which in one “world,” the system will come up heads, while in another, it’ll be tails.
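The GRW amplification mechanism can be sketched arithmetically, using GRW's originally suggested per-particle localisation rate of about 10⁻¹⁶ s⁻¹ (the helper functions below are illustrative):

```python
import math

LAM = 1e-16  # GRW's suggested per-particle spontaneous localisation rate, s^-1

def mean_collapse_time(N):
    """Expected time before an N-particle superposition first localises:
    collapses occur as a Poisson process at the amplified rate N*LAM."""
    return 1.0 / (N * LAM)

def p_collapse_within(N, t):
    """Probability that an N-particle superposition collapses within t seconds."""
    return 1.0 - math.exp(-N * LAM * t)

# A lone particle stays superposed for ~1e16 s (hundreds of millions of years);
# a macroscopic pointer of ~1e23 particles collapses in ~1e-7 s
print(mean_collapse_time(1), mean_collapse_time(1e23))
```

This is why GRW preserves microscopic superposition while guaranteeing definite macroscopic outcomes, observer or no observer.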
Ormrod, Venkatesh and Barrett (2023, Ananthaswamy 2023) focus on perspectival theories that obey three properties:
(1) Bell nonlocality (B). Alice chooses her type of measurement freely and independently of Bob, and vice versa – of their own free will – an important assumption. Then, when they eventually compare notes, the duo will find that their measurement outcomes are correlated in a manner that implies the states of the two particles are inseparable: knowing the state of one tells you about the state of the other.
(2) The preservation of information (I). Quantum systems that show deterministic and reversible evolution satisfy this condition. If you are wearing a green sweater today, in an information-preserving theory, it should still be possible, in principle, 10 years hence to retrieve the colour of your sweater even if no one saw you wearing it.
(3) Local dynamics (L). If there exists a frame of reference in which two events appear simultaneous, the regions of space containing them are said to be “space-like separated.” Local dynamics implies that the transformation of a system that takes a set of input states and produces a set of output states in one of these regions cannot causally affect the transformation of a system in the other region any faster than the speed of light, and vice versa. Each subsystem undergoes its own transformation, and so does the entire system as a whole. If the dynamics are local, the transformation of the full system can be decomposed into transformations of its individual parts: the dynamics are said to be separable. In contrast, when two particles share a state that’s Bell nonlocal (that is, when two particles are entangled, per quantum theory), the state is said to be inseparable into the individual states of the two particles. If transformations behaved similarly, in that the global transformation could not be described in terms of the transformations of individual subsystems, then the whole system would be dynamically inseparable.
Fig 76b: A graphical summary of the theorems. Possibilistic Bell Nonlocality is Bell Nonlocality that arises not only at the level of probabilities, but at the level of possibilities.
Their work analyses perspectival quantum theories that are BINSC – satisfying Bell nonlocality (B) and information preservation (I) together with no superluminal causation (NSC) – and, since NSC implies local dynamics (L), BINSC theories are BIL theories. Such BIL theories are then required to handle a deceptively simple thought experiment. Imagine that Alice and Bob, each in their own lab, make one measurement each on one of a pair of particles, both doing the exact same measurement; for example, they might both measure the spin of their particle in the up-down direction. Viewing Alice and Bob and their labs from the outside are Charlie and Daniela, respectively. In principle, Charlie and Daniela should be able to measure the spin of the same particles, say, in the left-right direction; in an information-preserving theory, this should be possible. Using this scenario, the team proved that the predictions of any BIL theory for the measurement outcomes of the four observers contradict the absoluteness of observed events. This leaves physicists at an unpalatable impasse: either accept the non-absoluteness of observed events or give up one of the assumptions of a BIL theory.
Ormrod says dynamical separability is “kind of an assumption of reductionism – you can explain the big stuff in terms of these little pieces.” Just like a Bell nonlocal state cannot be reduced to some constituent states, it may be that the dynamics of a system are similarly holistic, adding another kind of nonlocality to the universe. Importantly, giving it up doesn’t cause a theory to fall afoul of Einstein’s theories of relativity, much like physicists have argued that Bell nonlocality doesn’t require superluminal or nonlocal causal influences but merely nonseparable states. Ormrod, Venkatesh and Barrett note: “Perhaps the lesson of Bell is that the states of distant particles are inextricably linked, and the lesson of the new ... theorems is that their dynamics are too.” The assumptions used to prove the theorem don’t explicitly include an assumption about freedom of choice because no one is exercising such a choice. But if a theory is Bell nonlocal, it implicitly acknowledges the free will of the experimenters.
Fig 76c: (Above) An experimental realisation of the Wigner's friend setup, showing there is no such thing as objective reality – quantum mechanics allows two observers to experience different, conflicting realities. (Below) The proof-of-principle experiment of Bong et al. (2020) demonstrating mutual inconsistency of 'No-Superdeterminism', 'Locality' and 'Absoluteness of Observed Events'.
An experimental realisation of non-absoluteness of observation has been devised (Proietti et al., 2019), as shown in fig 76c, using quantum entanglement. The experiment involves two people observing a single photon that can exist in one of two alignments; until the moment someone actually measures it to determine which, the photon is in a superposition. A scientist analyses the photon and determines its alignment. Another scientist, unaware of the first's measurement, is able to confirm that the photon – and thus the first scientist's measurement – still exists in a quantum superposition of possible outcomes. As a result, each scientist experiences a different reality – both "true" even though they disagree with each other. In a subsequent experiment, Bong et al. (2020) transform the thought experiment into a mathematical theorem that confirms the irreconcilable contradiction at the heart of the Wigner scenario. The team also tests the theorem with an experiment, using photons as proxies for the humans, accompanied by new forms of Bell's inequalities, building on a scenario with two separated but entangled friends. The researchers prove that if quantum evolution is controllable on the scale of an observer, then one of (1) No-Superdeterminism – the assumption of 'freedom of choice' used in derivations of Bell inequalities, that the experimental settings can be chosen freely, uncorrelated with any relevant variables prior to that choice, (2) Locality, or (3) Absoluteness of Observed Events – that every observed event exists absolutely, not relatively – must be false. Although the violation of Bell-type inequalities in such scenarios is not in general sufficient to demonstrate the contradiction between those three assumptions, new inequalities can be derived, in a theory-independent manner, that are violated by quantum correlations. This is demonstrated in a proof-of-principle experiment where a photon's path is deemed an observer.
This new theorem places strictly stronger constraints on physical reality than Bell's theorem.
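The logic of a Bell-type violation underlying these theorems can be illustrated numerically. The following is a minimal sketch, not the theory-independent inequalities of Bong et al. themselves: it evaluates the standard CHSH quantity for a spin singlet, whose quantum correlation E(a,b) = −cos(a−b) exceeds the classical local-hidden-variable bound of 2.

```python
import math

def E(a, b):
    # Singlet-state correlation between spin measurements at angles a and b
    return -math.cos(a - b)

# Standard CHSH measurement angles
a, a2 = 0.0, math.pi / 2           # Alice's two settings
b, b2 = math.pi / 4, -math.pi / 4  # Bob's two settings

# CHSH combination: any local hidden-variable theory gives |S| <= 2
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S))  # 2.828... = 2*sqrt(2), violating the classical bound of 2
```

The quantum value 2√2 is Tsirelson's bound, the maximum violation quantum mechanics permits for this combination of settings.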
Self-Simulated Universe Another theory put forward by gravitational theorists (Irwin, Amaral & Chester 2020) also uses retrocausality to address the ultimate questions: Why is there anything here at all? What primal state of existence could have possibly birthed all that matter, energy, and time, all that everything? And how did consciousness arise – is it some fundamental proto-state of the universe itself, or an emergent phenomenon that's purely neurochemical and material in nature?
Fig 77b:
Self-Simulated Universe: Humans are near the point of demarcation, where EC or
thinking matter emerges into the choice-sphere of the infinite set of
possibilities of thought, EC∞.
Beyond the human level, physics allows for larger and more powerful networks
that are also conscious. At some stage of the simulation run, a conscious EC system
emerges that is capable of acting as the substrate for the primitive spacetime
code, its initial conditions, as mathematical thought, and simulation run, as a
thought, to self-actualize itself. Linear time would not permit this logic, but
non-linear time does.
This approach attempts to answer both questions in a way that weds aspects of Nick Bostrom's Simulation Argument with "timeless emergentism". Termed the "panpsychism self-simulation model", it says the physical universe may be a "strange loop" that may self-generate new sub-realities in an almost infinite hierarchy of tiers in-laid with simulated realities of conscious experience. In other words, the universe is creating itself through thought, willing itself into existence on a perpetual loop that efficiently uses all mathematics and fundamental particles at its disposal. The universe, they say, was always here (timeless emergentism) and is like one grand thought that makes mini thoughts, called "code-steps or actions", again sort of a Matryoshka doll.
David Chester comments:
“While many scientists presume materialism to be true, we believe that quantum physics may provide hints that our reality could be a mental construct. Recent advances in quantum gravity, like seeing spacetime emerge via a hologram, are also a hint that spacetime isn’t fundamental. This is also compatible with ancient Hermetic and Indian philosophy. In a sense, the mental construct of reality creates spacetime to efficiently understand itself by creating a network of subconscious entities that may interact and explore the totality of possibilities.”
They modify the simulation hypothesis to a self-simulation hypothesis, where the physical universe, as a strange loop, is a mental self-simulation that might exist as one of a broad class of possible code-theoretic quantum gravity models of reality obeying the principle of efficient language axiom, and discuss implications of the self-simulation hypothesis such as an informational arrow of time.
The self-simulation hypothesis is built upon the following axioms:
1. Reality, as a strange loop, is a code-based self-simulation in the mind of a panpsychic universal consciousness that emerges from itself via the information of code-based mathematical thought or self-referential symbolism plus emergent non-self-referential thought. Accordingly, reality is made of information called thought.
2. Non-local spacetime and particles are secondary or emergent from this code, which is itself a pre-spacetime thought within a self-emergent mind.
3. The panconsciousness has freewill to choose the code and make syntactical choices. Emergent lower levels of consciousness also make choices through observation that influence the code syntax choices of the panconsciousness.
4. Principle of efficient language (Irwin 2019). The desire or decision of the panconscious reality is to generate as much meaning or information as possible for a minimal number of primitive thoughts, i.e., syntactical choices, which are mathematical operations at the pre-spacetime code level.
Fig 77c:
This emphasis on coding is problematic, as it is trying to assert a
consciousness-makes-reality loop through an apparently abstract coded
representation based on discrete computation-like processes, assuming an
"it-from-bit" notion that reality is made from information, not just
described by it.
It from bit: Otherwise put, every it — every particle, every field of force, even the space-time continuum itself — derives its function, its meaning, its very existence entirely — even if in some contexts indirectly — from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. It from bit symbolizes the idea that every item of the physical world has at bottom — at a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe (Wheeler 1990).
Schwartz, Stapp & Beauregard (2005) advance a quantum theory of conscious volition, in which attentive will can influence physical brain states using quantum principles, in particular von Neumann's process 1, the collapse of the wave function, complementing process 2, the causal evolution of the Schrödinger wave function responsible for ongoing physical brain states. They cite specific cognitive processes leading to physical changes in ongoing brain function:
There is at least one type of information processing and manipulation that does not readily lend itself to explanations that assume that all final causes are subsumed within brain, or more generally, central nervous system mechanisms. The cases in question are those in which the conscious act of wilfully altering the mode by which experiential information is processed itself changes, in systematic ways, the cerebral mechanisms used. There is a growing recognition of the theoretical importance of applying experimental paradigms that use directed mental effort to produce systematic and predictable changes in brain function. ... Furthermore, an accelerating number of studies in the neuroimaging literature significantly support the thesis that, with appropriate training and effort, people can systematically alter neural circuitry associated with a variety of mental and physical states.
They point out that it is necessary in principle to advance to the quantum level to achieve an adequate theory of the neurophysiology of volitionally directed activity. The reason, essentially, is that classical physics is an approximation to the more accurate quantum theory, and that this classical approximation eliminates the causal efficacy of our conscious efforts that these experiments empirically manifest.
They explain how structural features of ion conductance channels critical to synaptic function entail that the classical approximation to quantum reality fails in principle to cover the dynamics of a human brain, so that quantum dynamics must be used. The principles of quantum theory must then link the quantum physical description of the subject’s brain to their stream of conscious experiences. The conscious choices by human agents thereby become injected non-trivially into the causal interpretation of neuroscience and neuropsychology experiments, through type 1 processes performing quantum measurement operations. This particularly applies to those experimental paradigms in which human subjects are required to perform decision-making or attention-focusing tasks that require conscious effort.
Conscious effort itself can, justifiably within science, be taken to be a primary variable whose complete causal origins may be untraceable in principle, but whose causal efficacy in the physical world can be explained on the basis of the laws of physics.
The mental act of clear-minded introspection and observation, variously known as mindfulness, mindful awareness, bare attention, the impartial spectator, etc., is a well-described psychological phenomenon with a long and distinguished history in the description of human mental states. ... In the conceived approach, the role played by the mind, when one is observing and modulating one’s own emotional states, is an intrinsically active and physically efficacious process in which mental action is affecting brain activity in a way concordant with the laws of physics.
They propose a neurobiological interpretation where calcium channels play a pivotal role in type 1 processes at the synaptic level:
At their narrowest points, calcium ion channels are less than a nanometre in diameter. This extreme smallness of the opening in the calcium ion channels has profound quantum mechanical implications. The narrowness of the channel restricts the lateral spatial dimension. Consequently, the lateral velocity is forced by the quantum uncertainty principle to become large. This causes the quantum cloud of possibilities associated with the calcium ion to fan out over an increasing area as it moves away from the tiny channel to the target region where the ion will be absorbed as a whole, or not absorbed at all, on some small triggering site. ... This spreading of this ion wave packet means that the ion may or may not be absorbed on the small triggering site. Accordingly, the contents of the vesicle may or may not be released. Consequently, the quantum state of the brain has a part in which the neurotransmitter is released and a part in which the neurotransmitter is not released. This quantum splitting occurs at every one of the trillions of nerve terminals. ... In fact, because of uncertainties on timings and locations, what is generated by the physical processes in the brain will be not a single discrete set of non-overlapping physical possibilities but rather a huge smear of classically conceived possibilities. Once the physical state of the brain has evolved into this huge smear of possibilities one must appeal to the quantum rules, and in particular to the effects of process 1, in order to connect the physically described world to the streams of consciousness of the observer/participants.
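The scale of the quantum spreading described above can be checked with a back-of-envelope calculation. The sketch below uses illustrative assumed values (a Ca²⁺ ion of mass ≈ 40 u confined laterally to ≈ 0.5 nm, and a ~0.1 ns transit to the triggering site) and applies the uncertainty relation Δv ≈ ħ/(2mΔx):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
u = 1.66053906660e-27    # atomic mass unit, kg

m_ca = 40 * u            # mass of a calcium ion, ~40 u
dx = 0.5e-9              # lateral confinement in the channel, ~0.5 nm (assumed)

# Heisenberg uncertainty: lateral velocity spread forced by the confinement
dv = hbar / (2 * m_ca * dx)
print(dv)  # ~1.6 m/s

# Over an assumed ~0.1 ns transit to the triggering site,
# the wave packet spreads laterally by roughly dv * t
transit = 1e-10          # s (assumed)
spread = dv * transit    # ~0.16 nm, comparable to atomic dimensions
print(spread)
```

Even with these rough figures, the lateral spread is comparable to the size of the triggering site, which is the core of the argument that absorption becomes a quantum-probabilistic event.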
However, they note that this focus on the motions of calcium ions in nerve terminals is not meant to suggest that this particular effect is the only place where quantum effects enter into the brain process, or that the quantum process 1 acts locally at these sites. What is needed here is only the existence of some large quantum effect.
A type 1 process beyond the local deterministic process 2 is required to pick out one experienced course of physical events from the smeared-out mass of possibilities generated by all of the alternative possible combinations of vesicle releases at all of the trillions of nerve terminals. This process brings in a choice that is not determined by any currently known law of nature, yet has a definite effect upon the brain of the chooser.
They single out the quantum Zeno effect, in which rapid multiple measurements can act to freeze a quantum state and delay its evolution, and cite James (1892, 417): "The essential achievement of the will, in short, when it is most ‘voluntary,’ is to attend to a difficult object and hold it fast before the mind. Effort of attention is thus the essential phenomenon of will. ... Consent to the idea’s undivided presence, this is effort’s sole achievement. Everywhere, then, the function of effort is the same: to keep affirming and adopting the thought which, if left to itself, would slip away." This coincides with the studies already cited on wilful control of the emotions, implying evidence of such an effect.
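The freezing effect can be made quantitative in a simple idealised case. As a sketch, assuming a two-state system that would rotate through angle θ between its states: n equally spaced projective measurements leave it in its initial state with probability cos²ⁿ(θ/n), which tends to 1 as n grows, so frequent measurement "holds the thought fast":

```python
import math

def survival(n, theta=math.pi / 2):
    """Probability the system is still found in its initial state
    after n equally spaced measurements during a rotation by theta."""
    return math.cos(theta / n) ** (2 * n)

for n in (1, 10, 100, 1000):
    print(n, survival(n))
# survival rises from ~0 at n=1 towards 1 as measurement becomes more frequent
```

This is the standard textbook idealisation of the quantum Zeno effect, not a model of attention itself; the analogy is that rapid sequences of process 1 events play the role of the repeated measurements.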
Much of the work on attention since James is summarized and analysed in Pashler (1998). He emphasizes that the empirical findings of attention studies argue for a distinction between perceptual attentional limitations and more central limitations involved in thought and the planning of action. A striking difference that emerges from the experimental analysis is that perceptual processes proceed essentially in parallel, whereas the post-perceptual processes of planning and executing actions form a single queue. This is in line with the distinction between ‘passive’ and ‘active’ processes: a passive stream of essentially isolated process 1 events versus active processes involving effort-induced rapid sequences of process 1 events that can saturate a given capacity.
There is in principle, in the quantum model, an essential dynamic difference between the unconscious processing done by the Schrödinger evolution, which generates by a local process an expanding collection of classically conceivable experiential possibilities and the process associated with the sequence of conscious events that constitute the wilful selection of action. The former are not limited by the queuing effect, because process 2 simply develops all of the possibilities in parallel. Nor is the stream of essentially isolated passive process 1 events thus limited. It is the closely packed active process 1 events that can, in the von Neumann formulation, be limited by the queuing effect.
This quantum model accommodates naturally all of the complex structural features of the empirical data that Pashler describes. Chapter 6 of his book emphasizes a specific finding: strong empirical evidence for what he calls a central processing bottleneck associated with the attentive selection of a motor action. This kind of bottleneck is what the quantum-physics-based theory predicts: the bottleneck is precisely the single linear sequence of mind–brain quantum events that von Neumann quantum theory describes.
Hameroff and Penrose (2014) have also proposed a controversial theory that consciousness originates at the quantum level inside neurons, rather than the conventional view that it is a product of connections between neurons, coupling orchestrated objective reduction (OOR) to hypothetical quantum cellular automata in the microtubules of neurons. The theory is regarded as implausible by critics, both physicists and neuroscientists who consider it to be a poor model of brain physiology on multiple grounds.
Orchestration refers to the hypothetical process by which microtubule-associated proteins influence or orchestrate qubit state reduction by modifying the spacetime separation of their superimposed states. The latter is based on Penrose's objective-collapse theory for interpreting quantum mechanics.
The tubulin protein dimers of the microtubules have hydrophobic pockets that may contain delocalised π electrons. Hameroff claims that these are close enough for the tubulin π electrons to become quantum entangled, which would leave these quantum computations isolated inside neurons. Hameroff then proposed, although the idea was rejected by Reimers (2009), that coherent Fröhlich condensates in microtubules in one neuron can link with microtubule condensates in other neurons and glial cells via the gap junctions of electrical synapses, claiming these are sufficiently small for quantum tunnelling across, allowing the condensates to extend across a large area of the brain. He further postulated that the action of this large-scale quantum activity is the source of 40 Hz gamma waves, building upon the theory that gap junctions are related to the gamma oscillation.
Because of its dependence on Penrose’s idea of gravitational quantum collapse, the theory is confined to objective reduction, at face value crippling the role of free-will in conscious experience. However Hameroff (2012) attempts to skirt this by applying notions of retro-causality, as illustrated in fig 77(2), in which a dual-time approach (King 1989) is used to invoke a quantum of the present, the Conscious NOW. We will see that retrocausality is a process widely cited also in this work. Hameroff justifies such retrocausality from three sources.
Firstly, he cites an open-brain experiment of Libet. Peripheral stimulus, e.g. of the skin of the hand, resulted in an “EP” spike in the somatosensory cortical area for the hand ∼30 ms after skin contact, consistent with the time required for a neuronal signal to travel from hand to spinal cord, thalamus, and brain. The stimulus also caused several hundred ms of ongoing cortical activity following the EP. Subjects reported conscious experience of the stimulus (using Libet’s rapidly moving clock) near-immediately, e.g. at the time of the EP at 30 ms, hinting at retro-causality of the delayed “readiness potential”.
Secondly, he cites a number of well-controlled studies using electrodermal activity, fMRI and other methods to look for emotional responses, e.g., to viewing images presented at random times on a computer screen. Surprisingly, the changes occurred half a second to two seconds before the images appeared. They termed the effect pre-sentiment because the subjects were not consciously aware of the emotional feelings. Non-conscious emotional sentiment (i.e., feelings) appeared to be referred backward in time. Bem (2012, 2016) reported on studies showing statistically significant backward time effects, most involving non-conscious influence of future emotional effects (e.g., erotic or threatening stimuli) on cognitive choices. Studies by others have reported both replication, and failure to replicate, the controversial results. Thirdly he cites a number of delayed choice experiments widely discussed in this work.
Fig 77: (1) An axon terminal releases neurotransmitters through a synapse, which are received by microtubules in a neuron's dendritic spine. (2) From left, a superposition develops over time, e.g. a particle separating from itself, shown as simultaneous curvatures in opposite directions. The magnitude of the separation is related to E, the gravitational self-energy. At a particular time t, E reaches threshold by E = ħ/t, and spontaneous OR occurs: one particular curvature is selected. This OR event is accompanied by a moment of conscious experience (“NOW”), its intensity proportional to E. Each OR event also results in temporal non-locality, referring quantum information backward in classical time (curved arrows). (3,4) Scale-dependent resonances from the pyramidal neuron, through microtubules, to π-orbitals and gravitational effects.
However, none of these processes has been empirically verified, and the complex tunnelling invoked is far from being a plausible neurophysiological process. The model requires the quantum state of the brain to have macroscopic quantum coherence, maintained for around a tenth of a second. But, according to calculations made by Max Tegmark (2000), such coherence ought not to persist for more than about 10⁻¹³ s. Hameroff and co-workers (Hagen et al. 2002) have advanced reasons why this number should actually be of the order of a tenth of a second, but 12 orders of magnitude is a very big difference to explain away, and serious doubts remain about whether the Penrose–Hameroff theory is technically viable. Two experiments (Lewton 2022, Tangerman 2022), presented at the Tucson Science of Consciousness conference, showed only that anaesthetics hastened delayed luminescence, and that under laser excitation, excitation diffused further through microtubules than expected in the absence of anaesthetics. There is no direct evidence for the proposed cellular automata, and microtubules are critically involved in neuronal architecture as well as molecular transport, so adding another competing function would create functional conflict. Hameroff (2022) cites processes running from the pyramidal neuron, down through microtubules, to π-orbital resonances and gravitational space-time effects, but the linkage to microtubules is weak. Evidence for anaesthetic disruption of microtubules (Kelz & Mashour 2019) applies indiscriminately to all anaesthetics, from halothane to ketamine, widely across the tree of life from paramecium to humans, indicating merely that microtubular integrity is necessary for consciousness; it does not indicate that microtubules have a key role beyond their essential architectural and transport functions.
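The numbers in this dispute are easy to reproduce. A sketch using the figures quoted above: the caption's threshold relation E = ħ/t gives the gravitational self-energy required for a posited 0.1 s conscious moment, and comparing that 0.1 s coherence requirement with Tegmark's ~10⁻¹³ s decoherence estimate yields the 12-order-of-magnitude gap:

```python
import math

hbar = 1.054571817e-34   # J*s

# Penrose OR threshold E = hbar / t for a 0.1 s conscious moment
tau = 0.1                # s, required coherence time
E_G = hbar / tau         # ~1e-33 J gravitational self-energy

# Tegmark's decoherence estimate vs the required coherence time
t_tegmark = 1e-13        # s
gap = math.log10(tau / t_tegmark)
print(E_G, gap)  # ~1.05e-33 J, 12.0 orders of magnitude
```

The tiny threshold energy, far below thermal energies at body temperature, is one way of seeing why critics regard the required coherence as physically implausible in a warm, wet brain.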
OOR would force collapse, but it remains unestablished how conscious volition is invoked, because collapse is occurring objectively in terms of Penrose’s notion of space-time blisters. It remains unclear how these hypothetical objective or “platonic” entities, as Penrose puts it, relate to subjective consciousness or volition. Hameroff (2012) in “How quantum brain biology can rescue conscious free will” attempts an explanation, but this simply comes down to objective OOR control:
Orch OR directly addresses conscious causal agency. Each reduction/conscious moment selects particular microtubule states which regulate neuronal firings, and thus control conscious behavior. Regarding consciousness occurring “too late,” quantum state reductions seem to involve temporal non-locality, able to refer quantum information both forward and backward in what we perceive as time, enabling real-time conscious causal action. Quantum brain biology and Orch OR can thus rescue free will.
For this reason Symbiotic Existential Cosmology remains agnostic about such attempts to invoke unestablished, exotic quantum effects, and instead points to the non-IID nature of brain processes generally, meaning that neurodynamics is a fractal quantum process not required to be adiabatically isolated as decoherence limits of technological quantum computing suggest.
QBism and the Conscious Consensus Quantum Reality
QBism (von Baeyer 2016) is an acronym for "quantum Bayesianism", a founding idea from which it has since moved on. It is a version of quantum physics founded on the conscious expectations of each physicist and their relationships with other physicists. According to QBism, experimental measurements of quantum phenomena do not quantify some feature of an independently existing natural structure. Instead, they are actions that produce experiences in the person or people doing the measurement.
“When I take an action on the world, something genuinely new comes out.”
This is very similar to the way Symbiotic Existential Cosmology presents consciousness as primary in the sense that we all experience subjective consciousness and infer the real world through the consensus view between conscious observers of our experiences of what we come to call the physical world. So although we know the physical world is necessary for our biological survival – the universe is necessary, we derive our knowledge of it exclusively and only through our conscious experiences of it.
The focus is on how to gain knowledge in a probabilistic universe... In this probabilistic interpretation, collapse of the quantum wave function has little to do with the object observed/measured. Rather, the crux of the matter is change in the knowledge of the observer based on new information acquired through the process of observing. Christopher Fuchs explains: “When a quantum state collapses, it’s not because anything is happening physically, it’s simply because this little piece of the world called a person has come across some knowledge, and he updates his knowledge… So the quantum state that’s being changed is just the person’s knowledge of the world, it’s not something existent in the world in and of itself.”
QBism is agnostic about whether there is a world that is structured independently of human thinking. It doesn’t assume we are measuring pre-existing structures, but nor does it pretend that quantum formalism is just a tool. Each measurement is a new event that guides us in formulating more accurate rules for what we will experience in future events. These rules are not just subjective, for they are openly discussed, compared and evaluated by other physicists. QBism therefore sees physicists as permanently connected with the world they are investigating. Physics, to them, is an open-ended exploration that proceeds by generating ever new laboratory experiences that lead to ever more successful, but revisable, expectations of what will be encountered in the future.
In QBism the wave function is no longer an aspect of physical reality as such, but a feature of how the observer's expectations will be changed by an act of quantum measurement.
The principal thesis of QBism is simply this: quantum probabilities are numerical measures of personal degrees of belief.
In the conventional version of quantum theory, the immediate cause of the collapse is left entirely unexplained, or "miraculous" although sometimes assumed to be essentially random. QBism solves the problem as follows. In any experiment the calculated wave function furnishes the prior probabilities for empirical observations that may be made later. Once an observation has been made new information becomes available to the agent performing the experiment. With this information the agent updates their probability and their wave function, instantaneously and without magic.
So in the Wigner's friend experiment, the friend reads the counter while Wigner, with his back turned to the apparatus, waits until he knows that the experiment is over. The friend learns that the wave function has collapsed to the up outcome. Wigner, on the other hand, knows that a measurement has taken place but doesn’t know its result. The wave function he assigns is a superposition of two possible outcomes, as before, but he now associates each with a definite reading of the counter and with his friend’s knowledge of that reading — a knowledge that Wigner does not share. For the QBist there is no problem: Wigner and his friend are both right. Each assigns a wave function reflecting the information available to them, and since their respective compilations of information differ, their wave functions differ too. As soon as Wigner looks at the counter himself or hears the result from his friend, he updates his wave function with the new information, and the two will agree once more—on a collapsed wave function.
According to the conventional interpretation of quantum mechanics, in the Schrödinger's cat experiment, the value of a superimposed wave function is a blend of two states, not one or the other. What is the state of the cat after one half-life of the atom, provided you have not opened the box? The fates of the cat and the atom are intimately entangled. An intact atom implies a living cat; a decayed atom implies a dead cat. It seems to follow that since the atom’s wave function is unquestionably in a superposition so is the cat: it is both alive and dead. As soon as you open the box, the paradox evaporates: the cat is either alive or dead. But while the box is still closed — what are we to make of the weird claim that the cat is dead and alive at the same time? According to QBism, the state of an unobserved atom, or a cat, has no value at all. It merely represents an abstract mathematical formula that gives the odds for a future observation: 0 or 1, intact or decayed, dead or alive. Claiming that the cat is dead and alive is as senseless as claiming that the outcome of a coin toss is both heads and tails while the coin is still tumbling through the air. Probability theory summarises the state of the spinning coin by assigning a probability of 1/2 that it will be heads. So QBism refuses to describe the cat’s condition before the box is opened and rescues it from being described as hovering in a limbo of living death.
If the wave-function, as QBism maintains, says nothing about an atom or any other quantum mechanical object except for the odds for future experimental outcomes, the unperformed experiment of looking in the box before it is opened has no result at all, not even a speculative one. The bottom line: According to the QBist interpretation, the entangled wave-function of the atom and the cat does not imply that the cat is alive and dead. Instead, it tells an agent what she can reasonably expect to find when they open the box.
This makes QBism compatible with phenomenologists, for whom experience is always “intentional” – i.e. directed towards something – and these intentionalities can be fulfilled or unfulfilled. Phenomenologists ask questions such as: what kind of experience is laboratory experience? How does laboratory experience – in which physicists are trained to see instruments and measurements in a certain way – differ from, say, emotional or social or physical experiences? And how do lab experiences allow us to formulate rules that anticipate future lab experiences?
Another overlap between QBism and phenomenology concerns the nature of experiments. Experiments are performances. They’re events that we conceive, arrange, produce, set in motion and witness, yet we can’t make them show us anything we wish. That doesn’t mean there is a deeper reality “out there” – just as, with Shakespeare, there is no “deep Hamlet” of which all other Hamlets we produce are imitations. In physics as in drama, the truth is in performance.
However, there is one caveat. We simply don't know whether consciousness itself can be associated only with collapsed probabilities, or whether it is also steeped, even as a complement, in the spooky world of entanglement. Reducing the entirety of physics to collapsed probabilities may therefore not convey the entire picture, and the degree to which conscious experiences correspond to unstable brain states at the edge of chaos – making phase coherence measurements akin to, or homologous with, quantum measurements – may mean this picture is vastly more complicated than meets the eye.
Complementing this description of the quantum world at large is the actual physics of how the brain processes information. By contrast with a digital computer, the brain uses both pulse-coded action potentials and continuous gradients in an adaptive parallel network. Conscious states tend to be distinguished from subconscious processing by virtue of coherent phase fronts of the brain’s wave excitations. Phase coherence of beats between wave functions (fig 71(c)) is also the basis of quantum uncertainty.
In addition, the brain uses edge-of-chaos dynamics, involving the butterfly effect – arbitrary sensitivity to small fluctuations in bounding conditions – and the creation of strange attractors to modulate wave processing, so that the dynamics doesn’t become locked into a given ordered state and can thus explore the phase space of possibilities, before making a transition to a more ordered state representing the perceived solution. Self-organised criticality is also a feature, as is neuronal threshold tuning. Feedback between the phase of brain waves on the cortex and the discrete action potentials of individual pyramidal cells, in which the phase is used to determine the timing of action potentials, creates a feedback between the continuous and discrete aspects of neuronal excitation. These processes, in combination, may effectively invoke a state where the brain is operating as an edge-of-chaos quantum computer, by making internal quantum measurements of its own unstable dynamical evolution as cortical wave excitons, complemented by discrete action potentials at the axonal level.
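The butterfly effect invoked above can be illustrated with the logistic map, a standard toy model of edge-of-chaos dynamics (an illustrative sketch, not the brain model described in the text): in the chaotic regime, a perturbation of one part in a billion grows to order one within a few dozen iterations, while in an ordered regime the same perturbation decays to nothing.

```python
import numpy as np

def logistic_orbit(x0, r, n):
    """Iterate the logistic map x -> r*x*(1-x), a standard toy model
    of ordered versus chaotic dynamics."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

# Chaotic regime (r = 3.9): two orbits differing by 1e-9 diverge to
# order-1 separation within a few dozen iterations (butterfly effect).
a = logistic_orbit(0.4, 3.9, 60)
b = logistic_orbit(0.4 + 1e-9, 3.9, 60)
print(np.max(np.abs(a[-20:] - b[-20:])))  # order-1 divergence

# Ordered regime (r = 3.2, a stable 2-cycle): the same perturbation
# decays and the orbits remain indistinguishable.
c = logistic_orbit(0.4, 3.2, 60)
d = logistic_orbit(0.4 + 1e-9, 3.2, 60)
print(abs(c[-1] - d[-1]))  # remains negligible
```

The contrast between the two regimes is the point: only near or in the chaotic regime can arbitrarily small fluctuations be amplified to macroscopic differences in the trajectory.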
Chaotic sensitivity, combined with related phenomena such as stochastic resonance, means that fractal, scale-traversing handshaking can occur between critically poised global brain states, neurons at threshold, ion channels and the quantum scale, in which quantum entanglement of excitons can occur (King 2014). At the same time, these processes explain why there is ample room in physical brain processing for quantum uncertainty to become a significant factor in unstable brain dynamics, fulfilling Eccles' (1986) notion that this can explain a role for consciousness without violating any classically causal processes.
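Stochastic resonance, invoked above as a hand-shaking mechanism between scales, can be sketched in a minimal threshold-detector model. This is a generic illustration with hypothetical parameters (a subthreshold sinusoid plus Gaussian noise), not the neural data cited: without noise the signal never crosses threshold, while moderate noise produces crossings correlated with the signal's phase.

```python
import numpy as np

rng = np.random.default_rng(0)

def threshold_detections(noise_sigma, n=20000, signal_amp=0.8, threshold=1.0):
    """Count upward threshold crossings of a subthreshold sinusoid plus
    Gaussian noise, restricted to the signal's positive half-cycles
    (i.e. signal-correlated firing)."""
    t = np.arange(n)
    signal = signal_amp * np.sin(2 * np.pi * t / 100)  # peak 0.8 < threshold 1.0
    x = signal + rng.normal(0, noise_sigma, n)
    crossings = (x[:-1] < threshold) & (x[1:] >= threshold)
    in_phase = signal[1:] > 0
    return int(np.sum(crossings & in_phase))

# Without noise, the subthreshold signal is never detected; with moderate
# noise, many signal-correlated crossings appear (stochastic resonance).
print(threshold_detections(0.0))  # 0: signal alone never reaches threshold
print(threshold_detections(0.3))  # many signal-correlated crossings
```

The design point is that the noise is not a nuisance here: it is what lifts the subthreshold signal over the detector's threshold, preferentially at the signal's peaks.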
This means that brain function is an edge-of-chaos quantum dynamical system which, unlike a digital computer, is far from being a causally deterministic process that would physically lock out any role for conscious decision-making, but instead leaves open wide scope for quantum uncertainty, consistent with a role for consciousness in tipping critical states. The key to the brain is thus its quantum physics, not just its chemistry and biology. This forms a descriptive overview of possible processes involved, rather than an empirical proof. In the face of the failure of promissory materialistic neuroscience (Popper & Eccles 1984) to demonstrate physical causal closure of the universe in the context of brain function, Occam’s razor cuts in the direction which avoids conflict with our empirical experience of conscious volitional efficacy over the physical universe.
Fig 78: (1) Edge-of-chaos transitions model of olfaction (Freeman 1991). (2) Stochastic resonance as a hand-shaking process between ion-channel and whole-brain states (Liljenström & Svedin 2005). (3) Hippocampal place maps (erdiklab.technion.ac.il). Hippocampal cells have also been shown to activate in response to desired locations in an animal's anticipated future that it has observed but not visited (Olafsdottir et al. 2015). (4) Illustration of micro-electrode recordings of local field potential (LFP) wave phase precession enabling correct spatial and temporal encoding via discrete action potentials in the hippocampus (Qasim et al. 2021). (5) Living systems are dynamical systems. They show ensembles of eigenbehaviors, which can be seen as unstable dynamical tendencies in the trajectory of the system. Francisco Varela’s neurophenomenology (Varela 1996; Rudrauf et al. 2003) is a valid attempt to bridge the hard and easy problems, through a biophysics of being, by developing a complementary subjective account of processes corresponding to objective brain processing. While these efforts help to elucidate the way brain states correspond to subjective experiences, using an understanding of resonant interlocking dynamical systems, they do not of themselves solve the subjective nature of the hard problem. (6) Joachim Keppler's (2018, 2021; James et al. 2022) view of conscious neural processing uses the framework of stochastic electrodynamics (SED), a branch of physics that affords a look behind the uncertainty of quantum field theory (QFT), to derive an explanation of the neural correlates of consciousness, based on the notion that all conceivable shades of phenomenal awareness are woven into the frequency spectrum of a universal background field, called the zero-point field (ZPF), implying that the fundamental mechanism underlying conscious systems rests upon access to information available in the ZPF.
This gives an effective interface description of how dynamical brain states correspond to subjective conscious experiences, but like the other dynamical descriptions, does not solve the hard problem itself of why the zero point field becomes subjective.
Diverse Theories of Consciousness
Fig 79: Overview of Theories of Consciousness reviewed, with rough comparable positions. Field or field-like theories are in blue/magenta. Explicit AI support magenta/red. Horizontal positions guided by specific author statements.
Section links: GNW, ART, DQF, ZPF, AST, CEMI, FEM, IIT, PEM, ORCH
Descriptions of the actively conscious brain revolve around extremely diverse conceptions. The neural network approach conceives of the brain as a network of neurons connected by axonal-dendritic synapses, with action potentials as discrete impulses travelling down the long pyramidal cell axons through which activity is encoded as a firing rate. In this view the notions of “brain waves” as evidenced in the EEG (electroencephalogram) and MEG (magnetoencephalogram) are just the collective averages of these spikes, having no function in themselves, being just an accessory low intensity electromagnetic cloud associated with neuronal activity, which happens to generate a degree of coupled synchronisation through the averaged excitations of the synaptic web. At the opposite extreme are field theories of the conscious brain in which fields have functional importance in themselves and help to explain the “binding” problem of how conscious experiences emerge from global brain dynamics.
Into the mix are also abstract theories of consciousness, such as Tononi and Koch’s (2015) IIT or integrated information theory and Graziano’s (2016) AST or attention schema theory, which attempt to formulate an abstract basis for consciousness that might arise in biological brains or synthetic neural networks given the right circumstances.
The mushroom experience that triggered Symbiotic Existential Cosmology reversed my world view from my original position (King 1996) of looking for the neurodynamic and quantum basis of consciousness in the brain, to realising that no such theory is possible, because a pure physicalist theory cannot bridge the hard-problem explanatory gap in the quantum universe, owing to the inability to demonstrate causal closure.
No matter how fascinating and counter-intuitive the complexities of the quantum, physical and biological universe are, no purely physicalist description of the neurodynamics of consciousness can possibly succeed, because it is scientifically impossible to establish a theoretical proof, or empirical demonstration, of the causal closure of the physical universe in the context of neurodynamics. The bald facts are that, no matter to what degree we use techniques, from optogenetics, through ECoG, to direct cell recording, there is no hope within the indeterminacies of the quantum universe of making an experimental verification of classical causal closure. Causal closure of the physical universe thus amounts to a formally undecidable cosmological proposition from the physical point of view, which is heralded as a presumptive 'religious' affirmative belief without scientific evidence, particularly in neuroscience.
The hard problem of consciousness is thus cosmological, not biological, or neurodynamic alone. Symbiotic Existential Cosmology corrects this by a minimal extension of quantum cosmology by adding the axiom of primal subjectivity, as we shall see below.
In stark contrast to this, the subjective experiential viewpoint perceives conscious volition over the physical universe as an existential certainty that is necessary for survival. When any two live human agents engage in a frank exchange of experiences and communications, such as my reply to you all now, which evidences my drafting of a consciously considered opinion and intentionally sending it to you in physical form, this can be established beyond reasonable doubt by mutual affirmation of our capacity to consciously and intentionally respond with a physical communication. This is the way living conscious human beings have always viewed the universe throughout history and it is a correct veridical empirical experience and observation of existential reality, consistent with personal responsibility, criminal and civil law on intent, all long-standing cultural traditions and the fact that 100% of our knowledge of the physical world comes through our conscious experience of it. Neuroscientists thus contradict this direct empirical evidence at their peril.
However, there is still a practical prospect of refining our empirical understanding of the part played by neurodynamics in generating subjective conscious experience and volition over the physical universe, through current and upcoming techniques in neuroscience. What these can do is demonstrate experimentally the nature of the neurodynamics occurring when conscious experiences are evoked, the so-called "neural correlate of consciousness", forming an interface between conscious experience and our ensuing decision-making actions.
To succeed at this scientific quest, we need to understand how quantum cosmology enters into the formation of biological tissues. The standard model of physics is symmetry-broken between the colour, weak and EM forces and gravity. This ensures that there are around a hundred positively charged atomic nuclei, with orbital electrons having both the periodic quantum properties of the s, p, d and f orbitals and non-linear EM charge interactions, centred on first-row covalent H-CNO chemistry, modified by P & S and light ionic and transition elements, forming a fractal cooperative bonding cascade from organic molecules like the amino acids and nucleotides, through globular proteins and nucleic acids, to complexes like the ribosome and the membrane, to cell organelles, cells and tissues. These constitute an interactive quantum form of matter, the most exotic form of matter in existence, whose negentropic thermodynamics in living systems is vastly more challenging than the quantum properties of solid-state physics and its various excitons and quasi-particles. Although these processes are now genetically and enzymatically encoded, the underlying fractal dynamics is a fundamental property of cosmological symmetry-breaking and abiogenesis. It introduces a cascade of quantum effects, in protein folding, allosteric active sites with tunnelling, membrane ionic and electron transport, and ultimately neurodynamics. Furthermore, biological processes are non-IID (they do not constitute identical, independently distributed quantum measurements), so they do not converge to the classical description and remain collectively quantum in nature throughout, spanning all or most aspects of neuronal excitability and metabolism.
This means that current theories of the interface between CNS neurodynamics and subjective conscious volition are all manifestly incomplete and qualitatively and quantitatively inadequate to model or explain the brain-experience interface. Symbiotic Existential Cosmology has thus made a comprehensive review of these, including GNW (Dehaene et al.), ART (Grossberg), DQF (Freeman & Vitiello), ZPF (Keppler), AST (Graziano), CEMI (McFadden), FEM (Solms & Friston), IIT (Tononi & Koch), PEM (Poznanski et al.), as well as outliers like ORCH (Hameroff & Penrose). The situation facing TOEs of consciousness is, despite experimental progress, even more parlous than that of physical TOEs, from supersymmetric, superstring and membrane theories to loop quantum gravity, which have as yet shown no signs of unification over multiple decades. In both fields this requires a foundational rethink and a paradigm shift. Symbiotic Existential Cosmology provides this to both fields simultaneously.
To understand this biologically, we need to recognise that the nature of consciousness as we know it, and all its key physical and biological features, arose in a single topological transition in the eucaryote endosymbiosis when respiration became sequestered in the mitochondria and the cell membrane became freed for edge-of-chaos excitation and receptor-based social signalling, through the same processes that are key to human neurodynamics today. This in turn led to the action potential, via the flagellar escape reaction, and to the graded membrane potentials and neurotransmitter receptor-based synaptic neural networks we see in neuronal excitation. It took a further billion years before these purposive processes, enabling sentience at the cellular level through the quantum processes we now witness in vision, audition, olfaction and feeling sensation, became linked in the colonial neural networks illustrated by hydra and later in the more organised brains of arthropods, vertebrates and cephalopods. This means that a purely neural-network view of cognition and consciousness is physically inadequate at the foundation. Moreover, the brain derives its complexity not just from our genome, which is vastly too small to generate the brain’s complexity, but from interactive processes of cell migration in the developing brain, which form a self-organising system through mutual neuronal recognition by neurotransmitter type and mutual excitation/inhibition.
Of these theories, GNW is the closest to a broad-brush, empirically researched account. Neural network theories like Grossberg’s ART generate crude necessary but insufficient conditions for consciousness, because they lack almost all of the biological principles involved. Pure abstract theories like IIT do likewise. Specialised quantum theories like Hameroff & Penrose's are untenable, both in current biology and fundamentally in evolutionary terms, because they have been contrived as a back-pack of oddball quantum processes, such as quantum microtubular cellular automata, not confluent with evolutionary processes, using increasingly contrived speculation to make up for inadequacies, e.g. in linking cellular processes through condensates. ORCH is also objective reduction, so it cannot address conscious volition.
There is good empirical support for two processes in brain dynamics. (1) Edge-of-chaos transitions from a higher energy more disordered dynamic to a less disordered attractor dynamic, which is also the basis of annealing in neural network models of a potential energy landscape. (2) Phase tuning between action potential timing in individual neurons and continuous local potential gradients, forming an analogue with quantum uncertainty based measurement of wave beats.
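The second process above rests on an analogue between phase measurement and quantum uncertainty: distinguishing two wave components whose beat frequency is Δf requires observing for a time of order 1/Δf, whether the waves are quantum or neural. A minimal numerical sketch of this time-frequency uncertainty, using illustrative frequencies rather than neural data:

```python
import numpy as np

def resolvable(f1, f2, duration, fs=1000.0):
    """Return True if sinusoids at f1 and f2 (Hz) appear as two distinct
    spectral peaks in a recording of the given duration (seconds)."""
    t = np.arange(0, duration, 1 / fs)
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    spectrum = np.abs(np.fft.rfft(x))
    # count well-separated local maxima above half the global peak
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]
             and spectrum[i] > 0.5 * spectrum.max()]
    return len(peaks) >= 2

# Resolving a 1 Hz beat (f2 - f1 = 1 Hz) needs an observation window of
# order 1/df = 1 s or longer; shorter windows merge the two components.
print(resolvable(40.0, 41.0, duration=0.25))  # too short: one merged peak
print(resolvable(40.0, 41.0, duration=4.0))   # long enough: two peaks
```

The trade-off is generic: the frequency resolution of any windowed measurement is roughly the reciprocal of the window length, which is the classical shadow of the energy-time uncertainty relation the text appeals to.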
These mean that field and field-like theories such as ZPF, DQF and PEM all have a degree of plausibility, complementing bare neural network descriptions. However, all these theories run into the problem of citing preferred physical mechanisms over the complex quantum-system picture manifest in tissue dynamics. ZPF cites the zero-point field, effectively conflating a statistical semi-classical treatment of QED with subjective consciousness as the quantum vacuum. It cites neurotransmitter molecular resonances at the synapse and periodic resonances in the brain as providing the link. DQF is well grounded in Freeman dynamics, but cites water-molecule structures, which are plausible but accessory and not easy to test. PEM cites quasi-polaritonic waves involving interaction between charges and dipoles, with an emphasis on delocalised orbitals, which are just one of many quantum-level processes prominently involved in respiration and photosynthesis, and makes a claim to "microfeels" as the foundation of a definition of precognitive information below the level of consciousness. It also restricts itself to multiscale thermodynamic holonomic processes, eliminating the quantum level, self-organised criticality and fractality.
The position of Symbiotic Existential Cosmology is that none of these theories, and particularly those that depend on pure physical materialism, have any prospect of solving the hard problem and particularly the hard problem extended to volition. Symbiotic Existential Cosmology therefore adopts a counter strategy to add an additional axiom to quantum cosmology that associates primal subjectivity and free will with an interface in each quantum, where “consciousness” is manifested in the special relativistic space-time extended wave function and "free will" is manifested in the intrinsic uncertainty of quantum collapse to the particle state. This primal subjectivity exists in germinal forms in unstable quantum-sensitive systems such as butterfly effect systems and becomes intentional consciousness as we know it in the eucaryote transition.
This transforms the description of conscious dynamics into one in which subjectivity is compliant with determined perceptual and cognitive factors, but utilises the brain state as a contextual environmental filter to deal with states of existential uncertainty threatening the survival of the organism. This is similar to AST, but without the utopian artificial-intelligence emphasis it shares with others such as ART, IIT and PEM. Key environmental survival questions are both computationally intractable and formally uncomputable, because the tiger that may pounce is also a conscious agent who can adapt their volitional strategy to unravel any computational "solution”. This provides a clean physical cut, in which subjective consciousness remains compliant with the determined boundary conditions realised by the cognitive brain, but has decision-making ability in situations where cellular or brain dynamics becomes unstable and quantum-sensitive. No causal conflict thus arises between conscious intent, restricted to uncertainty, and physical causes related to the environmental constraints. It invokes a model of quantum reality where uncertainty is not merely random, but is a function of unfolding environmental uncertainty as a whole. This is the survival advantage that cellular consciousness fixed in evolution by anticipating existential crises, and has conserved ever since, complementing cerebral cognition in decision-making. This is reflected experientially in how we make intuitive "hunch" overall decisions, and physically in certain super-causal forms of the transactional QM interpretation and super-determinism, both of which can have non-random quasi-ergodic hidden-variable interpretations and are compatible with free will.
The final and key point is that Symbiotic Existential Cosmology is biospherically symbiotic. Through this, the entire cosmology sees life and consciousness as the ultimate climactic crisis of living complexity interactively consummating the universe, inherited from cosmological symmetry-breaking, in what I describe as conscious paradise on the cosmic equator in space-time. Without the symbiosis factor, humanity as we stand will not survive a self-induced Fermi extinction caused by a mass extinction of biodiversity, so the cosmology is both definitively and informatively accurate, and redemptive in the long-term survival of the generations of life over evolutionary time scales.
Susan Pockett (2013) explains the history of these diverging synaptic and field theoretic views:
Köhler (1940) did put forward something he called “field theory”. Köhler only ever referred to electric fields as cortical correlates of percepts. His field theory was a theory of brain function. Lashley’s test was to lay several gold strips across the entire surface of one monkey’s brain, and insert about a dozen gold pins into a rather small area of each hemispheric visual cortex of another monkey. The idea was that these strips or pins should short-circuit the hypothesized figure currents, and thereby (if Köhler’s field theory was correct) disrupt the monkeys’ visual perception. The monkeys performed about as well on this task after insertion of the pins or strips as they had before (although the one with the inserted pins did “occasionally fail to see a small bit of food in the cup”) and Lashley felt justified in concluding from this that “the action of electric currents, as postulated by field theory, is not an important factor in cerebral integration.” Later Roger Sperry did experiments similar to Lashley’s, reporting similarly negative results.
Intriguingly, she notes that Libet, whom we shall meet later, despite declaring that the readiness potential preceded consciousness, also proposed a near-supernatural field theory:
Libet proposed in 1994 that consciousness is a field which is “not ... in any category of known physical fields, such as electromagnetic, gravitational etc” (Libet 1994). In Libet’s words, his proposed Conscious Mental Field “may be viewed as somewhat analogous to known physical fields ... however ... the CMF cannot be observed directly by known physical means.”
Pockett (2014) describes what she calls “process theories”:
The oldest classification system has two major categories, dualist and monist. Dualist theories equate consciousness with abstracta. Monist (aka physicalist) theories equate it with concreta. A more recent classification (Atkinson et al., 2000) divides theories of consciousness into process theories and vehicle theories: it says “Process theories assume that consciousness depends on certain functional or relational properties of representational vehicles, namely, the computations in which those vehicles engage.” The relative number of words devoted to process and vehicle theories in this description hints that at present, process theories massively dominate the theoretical landscape. But how sensible are they really?
She then discusses both Tononi & Koch’s (2015) IIT integrated information theory and Chalmers' (1996) multi-state “information spaces", and lists the following objections:
First, since information is explicitly defined by everyone except process theorists as an objective entity, it is not clear how process theorists can reasonably claim either that information in general, or that any subset or variety of information in particular, is subjective. No entity can logically be both mind-independent and the very essence of mind. Therefore, when process theorists use the word “information” they must be talking about something quite different from what everyone else means by that word. Exactly what they are talking about needs clarification. Second, since information is specifically defined by everybody (including Chalmers) as an abstract entity, any particular physical realization of information does not count as information at all. Third, it is a problem at least for scientists that process theories are untestable. The hypothesis that a particular brain process correlates with consciousness can certainly be tested empirically. But the only potentially testable prediction of theories that claim identity between consciousness and a particular kind of information or information processing is that this kind of information or information processing will be conscious no matter how it is physically instantiated.
These critiques will apply to a broad range of the theories of consciousness we have explored, including many in the figure above that do not limit themselves to the neural correlate of consciousness.
Theories of consciousness have, in the light of our understanding of brain processes gained from neuroscience, become heavily entwined with the objective physics and biology of brain function. Michel & Doerig (2021), in reviewing local and global theories of consciousness summarise current thinking, illustrating this dependence on neuroscience for understanding the enigmatic nature of consciousness.
Localists hold that, given some background conditions, neural activity within sensory modules can give rise to conscious experiences. For instance, according to the local recurrence theory, reentrant activity within the visual system is necessary and sufficient for conscious visual experiences. Globalists defend that consciousness involves the large-scale coordination of a variety of neuro-cognitive modules, or a set of high-level cognitive functions such as the capacity to form higher-order thoughts about one’s perceptual states. Localists tend to believe that consciousness is rich, that it does not require attention, and that phenomenal consciousness overflows cognitive access. Globalists typically hold that consciousness is sparse, requires attention, and is co-extensive with cognitive access.
According to local views, a perceptual feature is consciously experienced when it is appropriately represented in sensory systems, given some background conditions. As localism is a broad family of theories, what “appropriately” means depends on the local theory under consideration. Here, we consider only two of the most popular local theories: the micro-consciousness theory, and the local recurrence theory, focusing on the latter. According to the micro-consciousness theory “processing sites are also perceptual sites”. This theory is extremely local. The simple fact of representing a perceptual feature is sufficient for being conscious of that feature, given some background conditions. One becomes conscious of individual visual features before integrating them into a coherent whole. According to the local recurrence theory, consciousness depends on "recurrent" activity between low- and higher-level sensory areas. Representing a visual feature is necessary, but not sufficient for being conscious of it. The neural vehicle carrying that representation must also be subject to the right kind of recurrent dynamics. For instance, consciously perceiving a face consists in the feedforward activation of face selective neurons, quickly followed by a feedback signal to lower-level neurons encoding shape, color, and other visual features of the face, which in turn modulate their activity as a result.
The authors also stress post-dictive effects as a necessary non-local condition for consciousness which may last a third of a second after an event.
In postdictive effects, conscious perception of a feature depends on features presented at a later time. For instance, in feature fusion two rapidly successive stimuli are perceived as a single entity. When a red disk is followed by a green disk after 20ms, participants report perceiving a single yellow disk, and no red or green disk at all. This is a postdictive effect. Both the red and green disks are required to form the yellow percept. The visual system must store the representation of the first disk until the second disk appears to integrate both representations into the percept that subjects report having. Many other postdictive effects in the range of 10-150ms have been known for decades and are well documented. Postdictive effects are a challenge for local theories of consciousness. Features are locally represented in the brain but the participants report that they do not see those features.
This can have the implication that unconscious brain processes always precede conscious awareness, leading to the conclusion that our conscious awareness is just a post-constructed account of unconscious processes generated by the brain, and that subjective consciousness, along with the experience of volition, has no real basis, leading to a purely physically materialist account of subjective consciousness as merely an internal model of reality constructed by the brain.
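The feature-fusion finding quoted above can be sketched as a toy model, assuming a hypothetical fixed integration window (here 50 ms, within the 10-150 ms range cited) and simple additive colour mixing; both choices are illustrative, not the experimenters' model:

```python
# Hypothetical sketch of feature fusion: stimuli whose onsets fall within
# one temporal integration window are merged into a single reported percept.
RED, GREEN = (255, 0, 0), (0, 255, 0)

def fuse(stimuli, window_ms=50):
    """Merge (time_ms, rgb) stimuli within one integration window by
    additive (saturating) colour mixing; return the reported percepts."""
    percepts = []
    for t, colour in sorted(stimuli):
        if percepts and t - percepts[-1][0] < window_ms:
            t0, c0 = percepts[-1]
            merged = tuple(min(255, a + b) for a, b in zip(c0, colour))
            percepts[-1] = (t0, merged)
        else:
            percepts.append((t, colour))
    return [c for _, c in percepts]

# A red disk followed by a green disk 20 ms later fuses into one yellow
# percept, as in the experiments described above; at 200 ms they stay separate.
print(fuse([(0, RED), (20, GREEN)]))   # [(255, 255, 0)] i.e. yellow
print(fuse([(0, RED), (200, GREEN)]))  # two separate percepts
```

The postdictive point survives even in this crude sketch: the first percept cannot be finalised until the window has closed, i.e. until after the second stimulus has or has not arrived.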
Pockett (2014) in supporting her own field theory of consciousness, notes structural features that may exclude certain brain regions from being conscious in their own right:
It is now well accepted that sensory consciousness is not generated during the first, feed-forward pass of neural activity from the thalamus through the primary sensory cortex. Recurrent activity from other cortical areas back to the primary or secondary sensory cortex is necessary. Because the feedforward activity goes through architectonic Lamina 4 of the primary sensory cortex (which is composed largely of stellate cells and thus does not generate synaptic dipoles) while recurrent activity operates through synapses on pyramidal cells (which do generate dipoles), the conscious em patterns resulting from recurrent activity in the ‘early’ sensory cortex have a neutral area in the middle of their radial pattern. The common feature of brain areas that can not generate conscious experience – which are now seen to include motor cortex as well as hippocampus, cerebellum and any sub-cortical area – is that they all lack an architectonic Lamina 4 [layer 4 of the cortex].
By contrast with theories of consciousness based on the brain alone, Symbiotic Existential Cosmology sees subjectivity as being a cosmological complement to the physical universe. It thus seeks to explain subjective conscious experience as a cosmological, rather than just a purely biological phenomenon, in a way which gives validation and real meaning to our experience of subjective conscious volition over the physical universe, expressed in all our behavioural activities and our sense of personal responsibility for our actions and leads towards a state of biospheric symbiosis as climax living diversity across the generations of life as a whole, ensuring our continued survival.
Theories of consciousness have, in the light of our understanding of brain processes gained from neuroscience, become heavily entwined with the objective physics and biology of brain function. Michel & Doerig (2021), in reviewing local and global theories of consciousness summarise current thinking, illustrating this dependence on neuroscience for understanding the enigmatic nature of consciousness.
Localists hold that, given some background conditions, neural activity within sensory modules can give rise to conscious experiences. For instance, according to the local recurrence theory, reentrant activity within the visual system is necessary and sufficient for conscious visual experiences. Globalists defend that consciousness involves the large-scale coordination of a variety of neuro-cognitive modules, or a set of high-level cognitive functions such as the capacity to form higher-order thoughts about one’s perceptual states. Localists tend to believe that consciousness is rich, that it does not require attention, and that phenomenal consciousness overflows cognitive access. Globalists typically hold that consciousness is sparse, requires attention, and is co-extensive with cognitive access.
According to local views, a perceptual feature is consciously experienced when it is appropriately represented in sensory systems, given some background conditions. As localism is a broad family of theories, what “appropriately” means depends on the local theory under consideration. Here, we consider only two of the most popular local theories: the micro-consciousness theory, and the local recurrence theory, focusing on the latter. According to the micro-consciousness theory “processing sites are also perceptual sites”. This theory is extremely local. The simple fact of representing a perceptual feature is sufficient for being conscious of that feature, given some background conditions. One becomes conscious of individual visual features before integrating them into a coherent whole. According to the local recurrence theory, consciousness depends on "recurrent" activity between low- and higher-level sensory areas. Representing a visual feature is necessary, but not sufficient for being conscious of it. The neural vehicle carrying that representation must also be subject to the right kind of recurrent dynamics. For instance, consciously perceiving a face consists in the feedforward activation of face selective neurons, quickly followed by a feedback signal to lower-level neurons encoding shape, color, and other visual features of the face, which in turn modulate their activity as a result.
The authors also stress postdictive effects, which may extend up to a third of a second after an event, as a necessary non-local condition for consciousness.
In postdictive effects, conscious perception of a feature depends on features presented at a later time. For instance, in feature fusion two rapidly successive stimuli are perceived as a single entity. When a red disk is followed by a green disk after 20ms, participants report perceiving a single yellow disk, and no red or green disk at all. This is a postdictive effect. Both the red and green disks are required to form the yellow percept. The visual system must store the representation of the first disk until the second disk appears to integrate both representations into the percept that subjects report having. Many other postdictive effects in the range of 10-150ms have been known for decades and are well documented. Postdictive effects are a challenge for local theories of consciousness. Features are locally represented in the brain but the participants report that they do not see those features.
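The fusion of a red and a green disk into a reported yellow percept follows ordinary additive colour mixing. The toy sketch below (purely illustrative, with a hypothetical `fuse` helper; it is in no way a model of the visual system) shows why two stimuli integrated over a ~20 ms window could yield a single yellow report in additive RGB space:

```python
# Toy illustration of feature fusion as additive colour mixing.
# NOT a model of visual processing; it only shows the additive
# colour arithmetic behind the red + green -> yellow report.

def fuse(rgb_a, rgb_b):
    """Additively mix two RGB colours, clamping each channel to 255."""
    return tuple(min(a + b, 255) for a, b in zip(rgb_a, rgb_b))

RED = (255, 0, 0)
GREEN = (0, 255, 0)

print(fuse(RED, GREEN))  # (255, 255, 0) -- yellow
```

The point of the postdictive challenge is precisely that this integration requires the first stimulus to be held until the second arrives, so the reported percept depends on a feature presented later in time.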
This can also be taken to imply that unconscious brain processes always precede conscious awareness, leading to the conclusion that conscious awareness is just a post-constructed account of unconscious processes generated by the brain, that subjective consciousness, along with the experience of volition, has no real basis, and hence to a purely materialist account of subjective consciousness as merely an internal model of reality constructed by the brain.
Symbiotic Existential Cosmology rejects such views and sees subjectivity as a cosmological complement to the physical universe. It thus seeks to explain subjective conscious experience as a cosmological, rather than a purely biological, phenomenon, in a way that validates and gives real meaning to our experience of subjective conscious volition over the physical universe, expressed in all our behavioural activities and our sense of personal responsibility for our actions. This in turn leads towards a state of biospheric symbiosis, as climax living diversity across the generations of life as a whole, ensuring our continued survival.
Field Theories of Consciousness
Joachim Keppler (2018, 2021) presents an analysis drawing conscious experiences into the orbit of stochastic electrodynamics (SED), a form of quantum field theory built on the conception that the universe is imbued with an all-pervasive electromagnetic background field, the zero-point field (ZPF), which, in its original form, is a homogeneous, isotropic, scale-invariant and maximally disordered ocean of energy with completely uncorrelated field modes and a unique power spectral density. This is essentially a stochastic treatment of the uncertainty associated with the quantum vacuum in depictions such as the Feynman approach to quantum electrodynamics (fig 71(e)). The ZPF thus comprises the multiple manifestations of uncertainty in the quantum vacuum involving virtual photons, electrons and positrons, as well as quarks and gluons, implicit in the muon's anomalous magnetic moment (Borsanyi et al. 2021).
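The "unique power spectral density" referred to is, in standard SED treatments (a standard result of the SED literature, stated here for orientation rather than drawn from this text), the zero-point spectrum

```latex
\rho_{\mathrm{ZPF}}(\omega)\, d\omega \;=\; \frac{\hbar\,\omega^{3}}{2\pi^{2} c^{3}}\, d\omega ,
```

corresponding to an average energy of \( \hbar\omega/2 \) per field mode. The cubic frequency dependence is the only spectrum that is Lorentz invariant, so the field looks identical to all inertial observers, which is why it can be described as scale-invariant and maximally disordered.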
In the approach of SED (de la Peña et al. 2020), in which the stochastic aspect corresponds to the effects of the collapse process into the classical limit [28], consciousness is represented by the ZPF (Keppler 2018). This provides a basis to discuss the brain dynamics accompanying conscious states in terms of two hypotheses concerning the ZPF:
“The aforementioned characteristics and unique properties of the ZPF make one realize that this field has the potential to provide the universal basis for consciousness from which conscious systems acquire their phenomenal qualities. On this basis, I posit that all conceivable shades of phenomenal awareness are woven into the fabric of the background field. Accordingly, due to its disordered ground state, the ZPF can be looked upon as a formless sea of consciousness that carries an enormous range of potentially available phenomenal nuances. Proceeding from this postulate, the mechanism underlying quantum systems has all the makings of a truly fundamental mechanism behind conscious systems, leading to the assumption that conscious systems extract their phenomenal qualities from the phenomenal color palette immanent in the ZPF. ”
Fig 80: In Keppler's model, the phase transitions underlying the formation of coherent activity patterns (attractors) are triggered by modulating the concentrations of neurotransmitters. When the concentration of neurotransmitter molecules lies above a critical threshold and selected ZPF modes are in resonance with the characteristic transition frequencies between molecular energy levels, receptor activations ensue that drive the emergence of neuronal avalanches. The set of selected ZPF modes that is involved in the formation and stabilisation of an attractor determines the phenomenal properties of the conscious state.
His description demonstrates the kind of boundary conditions in brain dynamics likely to correspond to subjective states, and thus provides a good insight into the stochastic uncertainties of the brain dynamics of conscious states that would correspond to the subjective aspect; it even claims to envelop all possible modes of qualitative subjectivity in the features of the ZPF underlying uncertainty. But it remains to be established that the ZPF can accommodate all the qualitative variations spanning the senses of sight, sound and smell, which may rather correspond to the external quantum nature of these senses.
The ZPF does not of itself solve the hard problem as such because, at face value, it is a purely physical manifestation of quantum uncertainty with no subjective manifestation. However Keppler claims to make this link clear as well: "A detailed comparison between the findings of SED and the insights of Eastern philosophy reveals not only a striking congruence as far as the basic principles behind matter are concerned. It also gives us the important hint that the ZPF is a promising candidate for the carrier of consciousness, suggesting that consciousness is a fundamental property of the universe, that the ZPF is the substrate of consciousness and that our individual consciousness is the result of a dynamic interaction process that causes the realization of ZPF information states. … In that it is ubiquitous and equipped with unique properties, the ZPF has the potential to define a universally standardized substratum for our conscious minds, giving rise to the conjecture that the brain is a complex instrument that filters the varied shades of sensations and emotions selectively out of the all-pervasive field of consciousness, the ZPF" (Keppler, 2013).
In personal communication regarding these concerns, Joachim responds as follows:
I understand your reservations about conventional field theories of consciousness. The main problem with these approaches (e.g., McFadden’s approach) is that they cannot draw a dividing line between conscious and unconscious field configurations. This leads to the situation that the formation of certain field configurations in the brain is claimed to be associated with consciousness, while the formation of the same (or similar) field configurations in an electronic device would usually not be brought in relation with consciousness. This is what you call quite rightly a common category error. Now, the crucial point is that the ZPF, being the primordial basis of the electromagnetic interaction, offers a way to avoid this category error. According to the approach I propose, the ZPF (with all its field modes) is the substrate of consciousness, everywhere and unrestrictedly. The main difference between conscious and unconscious systems (processes) is their ability to enter into a resonant coupling with the ZPF, resulting in an amplification of selected ZPF modes. Only a special type of system has this ability (the conditions are described in my article). If a system meets the conditions, one must assume that it also has the ability to generate conscious states.
Without accepting any materialistic notion of quantum fields being identifiable with subjective consciousness, this does provide a basis confluent with the description invoked in this article, which uses the infinite number of ground states in quantum field theory, as opposed to quantum mechanics, to thermodynamically model memory states and the global amplitude- and frequency-modulated binding in the EEG.
The dissipative quantum model of brain dynamics (Freeman W & Vitiello 2006, 2007, 2016, Capolupo A, Freeman & Vitiello 2013, Vitiello 2015, Sabbadini & Vitiello 2019) provides another field theoretic description. I include a shortened extract from Freeman & Vitiello (2015), which highlights to me the most outstanding field theoretic description of the neural correlate of consciousness I know of. It also has the support of Freeman's dynamical attractor dynamics as illustrated in fig 78, and likewise has time dual properties similar to the transactional interpretation discussed above, invoking complementary time-directed roles of emergence and imagination:
Fig 81: Molecular biology is a theme and variations on the polar and non-polar properties of organic molecules residing in an aqueous environment. Nucleotide double helices, protein folding and micelle structures, as well as membranes, are all energetically maintained by their surrounding aqueous structures. Water has one of the highest specific heats of all because of the large number of internal dynamic quantum states. Myoglobin (Mb), the oxygen-transporting protein in muscle, containing a heme active site, illustrates this (Ansari et al. 1984), both in its functionally important movements (fim) and its equilibrium fluctuations, invoking fractal energetics between the high and low energy states of Mb and MbCO. This activity in turn is stabilised both by non-polar side chains maintaining the aqueous structure and polar side chains interacting with the aqueous environment to form water hydration structures. (Top left) The hydration shell of myoglobin (blue surface) with 1911 water molecules (CPK model), the approximate number needed for optimal function (Vajda & Perczel 2014). Lower: "Here we show that molecules taking part in biochemical processes from small molecules to proteins are critical quantum mechanically. Electronic Hamiltonians of biomolecules are tuned exactly to the critical point of the metal-insulator transition separating the Anderson localized insulator phase from the conducting disordered metal phase." Left: The HOMO/LUMO orbitals for Myoglobin calculated with the Extended Hückel method. Right: Generalized fractal dimensions Dq of the wave functions (Vattay et al. 2015).
We began by using classical physics to model the dendritic integration of cortical dynamics with differential equations, ranging in complexity from single positive loops in memory through to simulated intentional behavior (Kozma and Freeman 2009). We identified the desired candidate form in a discrete electrochemical wave packet embedded in the electroencephalogram (EEG), often with the form of a vortex like a hurricane, which carried a spatial pattern of amplitude modulation (AM) that qualified as a candidate for thought content.
Measurement of scalp EEG in humans showed that the size and speed of the formation of wave packets were too big to be attributed to the classical neurophysiology of neural networks, so we explored quantum approaches. In order to use dissipative quantum field theory it is necessary to include the impact of brain and body on the environment. Physicists do this conceptually and formally by doubling the variables (Vitiello 1995, 2001, Freeman and Vitiello 2006) that describe dendritic integration in the action-perception cycle. By doing so they create a Double, and then integrate the equations in reverse time, so that every source and sink for the brain-body is matched by a sink or source for the Double, together creating a closed system.
Fig 82: Field theory model of inward projecting electromagnetic fields overlapping in basal brain centres (MacIver 2022).
On convergence to the attractor the neural activity in each sensory cortex condenses from a gas-like regime of sparse, disordered firing of action potentials at random intervals to a liquid-like macroscopic field of collective activity. The microscopic pulses still occur at irregular intervals, but the probability of firing is no longer random. The neural mass oscillates at the group frequencies, to which the pulses conform in a type of time multiplexing. The EEG or ECoG (electrocorticogram) scalar field during the liquid phase revealed a burst of beta or gamma oscillation we denoted as a wave packet. Its AM patterns provided the neural correlates of perception and action. The surface grain inferred that the information capacity of wave packets is very high. The intense electrochemical energy of the fields was provided everywhere by the pre-existing trans-membrane ionic concentration gradients.
The theory cites water molecules and the cytosol as the basis for the quantum field description, a position supported at the molecular level by the polarisation of the cytoplasmic medium and all its constituents between aqueous polar and hydrophobic non-polar energetics as illustrated in fig 81.
Neurons, glia cells and other physiological units are [treated as] classical objects. The quantum degrees of freedom of the model are associated to the dynamics of the electrical dipoles of the molecules of the basic components of the system, i.e. biomolecules and molecules of the water matrix in which they are embedded. The coherence of the long-range correlations is of the kind described by quantum field theory in a large number of physical systems, in the standard model of particle physics as well as in condensed matter physics, ranging from crystals to magnets, from superconductive metals to superfluids. The coherent states characterizing such systems are stable in a wide range of temperatures.
In physiological terms the field consists of heightened ephaptic [0] excitability in an interactive region of neuropil, which creates a dominant focus by which every neuron is sensitized, and to which every neuron contributes its remembrance. In physical terms, the dynamical output of the many-body interaction of the vibrational quanta of the electric dipoles of water molecules and other biomolecules energize the neuropil, the densely compartmentalized tissue of axons, dendrites and glia through which neurons force ionic currents. The boson condensation provides the long-range coherence, which in turn allows and facilitates synaptic communication among neuron populations.
The stages of activation of the quantum field boson condensation correspond closely to stages of the Freeman attractor dynamics investigated empirically in the EEG and ECoG:
We conceive each action-perception cycle as having three stages, each with its neurodynamics and its psychodynamics (Freeman 2015). Each stage has at least one phase transition and may have two or more before the next stage. In the first stage a boson condensation forms a gamma wave packet by a phase transition in each of the primary sensory cortices. Only in stage one a phase transition would occur in a single cortex. In stage two the entorhinal cortex integrates all modalities before making a gestalt.
When the boson condensation carrying its AM pattern invades and recruits the amygdala and hypothalamus, we propose that this correlates with awareness of emotion and value with incipient awareness of content. In the second stage a more extended boson condensation forms a larger wave packet in the beta range that extends through the entire limbic system including the entorhinal cortex, which is central in an AM pattern. We believe it correlates with a flash memory unifying the multiple primary percepts into a gestalt, for which the time and place of the subject forming the gestalt are provided by the hippocampus. A third phase transition forms a boson condensation that sustains a global AM pattern, the manifestations of which in the EEG extend over the whole scalp. We propose that the global AM pattern is accompanied by comprehension of the stimulus meaning, which constitutes an up-to-date status summary as the basis for the next intended action.
The dual time representation of the quantum field and its double invokes the key innovative and anticipatory features of conscious imagination:
Open systems require an environment to provide the sink where their waste energy goes, and a source of free energy which feeds them. From the standpoint of the energy flux balance, brains describe the relevant restructured part of the environment using the time-reversed copy of the system, its complement or Double (Vitiello 2001). Where do the hypotheses come from? The answer is: from imagination. In theory the best sources for hypotheses are not memories as they appear in experience, but images mirrored backward in time. The imaginings are not constrained by thermodynamics. The mirror sinks and sources are imagined, not emergent. From this asymmetry we infer that the mirror copy exists as a dynamical system of nerve energy, by which the Double produces its hypotheses and predictions, which we experience as perception, and which we test by taking action. It is the Double that imagines the world outside, free from the shackles of thermodynamic reality. It is the Double that soars.
Johnjoe McFadden (2020) likewise has a theory of consciousness associated with the electromagnetic wave properties of the brain’s EM field interacting with the matter properties of “unconscious” neuronal processing. He summarises his theory in his own words as follows:
I describe the conscious electromagnetic information (cemi) field theory which has proposed that consciousness is physically integrated, and causally active, information encoded in the brain’s global electromagnetic (EM) field. I here extend the theory to argue that consciousness implements algorithms in space, rather than time, within the brain’s EM field. I describe how the cemi field theory accounts for most observed features of consciousness and describe recent experimental support for the theory. … The cemi field theory differs from some other field theories of consciousness in that it proposes that consciousness — as the brain’s EM field — has outputs as well as inputs. In the theory, the brain’s endogenous EM field influences brain activity in a feedback loop (note that, despite its ‘free’ adjective, the cemi field’s proposed influence is entirely causal, acting on voltage-gated ion channels in neuronal membranes to trigger neural firing).
The lack of correlation between complexity of information integration and conscious thought is also apparent in the common-place observation that tasks that must surely require a massive degree of information integration, such as the locomotory actions needed to run across a rugged terrain, may be performed without awareness but simple sensory inputs, such as stubbing your toe, will over-ride your conscious thoughts. The cemi field theory proposes that the non-conscious neural processing involves temporal (computational) integration whereas operations, such as natural language comprehension, require the simultaneous spatial integration provided by the cemi field. … Dehaene (2014) has recently described four key signatures of consciousness: (i) a sudden ignition of parietal and prefrontal circuits; (ii) a slow P3 wave in EEG; (iii) a late and sudden burst of high-frequency oscillations; and (iv) exchange of bidirectional and synchronized messages over long distances in the cortex. It is notable that the only feature common to each of these signatures—aspects of what Dehaene calls a ‘global ignition’ or ‘avalanche’—is large endogenous EM field perturbations in the brain, entirely consistent with the cemi field theory.
A fourth field-like theory based rather on classical information mediated by quasi particles and their quasi-polaritonic waves is also reviewed and critiqued in Quasi-particle Materialism.
Earlier, John Eccles (1986) proposed a brain-mind identity theory involving psychon quasi-particles mediating the uncertainty of synaptic transmission to complementary dendrons, cylindrical bundles of neurons arranged vertically in the six outer layers or laminae of the cortex. Eccles proposed that each of the 40 million dendrons is linked with a mental unit, or "psychon", representing a unitary conscious experience. In willed actions and thought, psychons act on dendrons and, for a moment, increase the probability of the firing of selected neurons through a quantum tunnelling effect in synaptic exocytosis, while in perception the reverse process takes place. This model has been elaborated by a number of researchers (Eccles 1990, 1994, Beck & Eccles 1992, Georgiev 2002, Hari 2008). The difficulty with the theory is that the psychons are then physical quasi-particles with integrative mental properties, so it is a contradictory description that does not manifest subjectivity except through its integrative physical properties.
Summarising the state of play, we have two manifestations of consciousness at the interface with objective physical description: (a) the hard problem of consciousness and (b) the problem of quantum measurement, both of which are in continual debate. Together these provide complementary windows on the abyss in the scientific description, and on the complete solution of existential cosmology that we shall explore in this article.
Neural Nets versus Biological Brains
Steven Grossberg is recognised for his contribution to ideas using nonlinear systems of differential equations such as laminar computing, where the layered cortical structures of mammalian brains provide selective advantages, and for complementary computing, which concerns the idea that pairs of parallel cortical processing streams compute complementary properties in the brain, each stream having complementary computational strengths and weaknesses, analogous to physical complementarity in the uncertainty principle. Each can possess multiple processing stages realising a hierarchical resolution of “uncertainty”, which here means that computing one set of properties at a given stage prevents computation of a complementary set of properties at that stage.
“Conscious Mind, Resonant Brain” (Grossberg 2021) provides a panoramic model of the brain, from neural networks to network representations of conscious brain states. In so doing, he presents a view based on resonant non-linear systems, which he calls adaptive resonance theory (ART), in which a subset of “resonant” brain states are associated with conscious experiences. While I applaud his use of non-linear dynamics, ART is a structural abstract neural network model and not what I as a mathematical dynamicist conceive of as "resonance", compared with the more realistic GNW, or global neuronal workspace model.
The primary intuition behind the ART model is that object identification and recognition generally occur as a result of the interaction of 'top-down' observer expectations with 'bottom-up' sensory information. The model postulates that 'top-down' expectations take the form of a memory template or prototype that is then compared with the actual features of an object as detected by the senses. This comparison gives rise to a measure of category belongingness. As long as this difference between sensation and expectation does not exceed a set threshold called the 'vigilance parameter', the sensed object will be considered a member of the expected class. The system thus offers a solution to the 'plasticity/stability' problem, i.e. the problem of acquiring new knowledge without disrupting existing knowledge, also called incremental learning.
The basic ART structure.
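The template-matching and vigilance mechanism described above can be sketched in a few lines. The following is a minimal sketch loosely following binary ART-1; the function and variable names are our own illustrative choices, not Grossberg's notation, and it omits the choice function that orders category search in the full model:

```python
# Minimal ART-1-style category matching with a vigilance parameter.
# Each category keeps a binary prototype (template); an input joins a
# category when its match score reaches the vigilance threshold,
# otherwise a mismatch reset creates a new category.

def art_classify(patterns, vigilance=0.7):
    """Assign binary patterns (tuples of 0/1) to categories."""
    templates = []   # one learned prototype per category
    labels = []
    for p in patterns:
        assigned = None
        for k, w in enumerate(templates):
            overlap = [pi & wi for pi, wi in zip(p, w)]
            # match = |p AND w| / |p|: compares sensation with expectation
            match = sum(overlap) / max(sum(p), 1)
            if match >= vigilance:        # resonance: within vigilance
                templates[k] = overlap    # learn by refining the prototype
                assigned = k
                break
        if assigned is None:              # mismatch reset: new category
            templates.append(list(p))
            assigned = len(templates) - 1
        labels.append(assigned)
    return labels, templates

labels, templates = art_classify(
    [(1, 1, 0, 0), (1, 1, 1, 0), (0, 0, 1, 1)], vigilance=0.6)
print(labels)  # [0, 0, 1]
```

Raising the vigilance makes categorisation finer: at vigilance 0.7 the second pattern no longer matches the first prototype (match 2/3) and is given its own category, illustrating how the vigilance parameter trades plasticity against stability.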
The work shows in detail how and why multiple processing stages are needed before the brain can construct a complete and stable enough representation of the information in the world with which to predict environmental challenges and thus control effective behaviours. Complementary computing and hierarchical resolution of uncertainty overcome these problems until perceptual representations that are sufficiently complete, context-sensitive, and stable can be formed. The brain regions where these representations are completed are different for seeing, hearing, feeling, and knowing.
His proposed answer is that a resonant state is generated that selectively “lights up” these representations and thereby renders them conscious. These conscious representations can then be used to trigger effective behaviours:
My proposed answer is: A resonant state is generated that selectively “lights up” these representations and thereby renders them conscious. These conscious representations can then be used to trigger effective behaviors. Consciousness hereby enables our brains to prevent the noisy and ambiguous information that is computed at earlier processing stages from triggering actions that could lead to disastrous consequences. Conscious states thus provide an extra degree of freedom whereby the brain ensures that its interactions with the environment, whether external or internal, are as effective as possible, given the information at hand.
He addresses the hard problem of consciousness in its varying aspects:
As Chalmers (1995) has noted: “The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. ... Even after we have explained the functional, dynamical, and structural properties of the conscious mind, we can still meaningfully ask the question, Why is it conscious? There seems to be an unbridgeable explanatory gap between the physical world and consciousness. All these factors make the hard problem hard. … Philosophers vary passionately in their views between the claim that no Hard Problem remains once it is explained how the brain generates experience, as in the writings of Daniel Dennett, to the claim that it cannot in principle be solved by the scientific method, as in the writings of David Chalmers. See the above reference for a good summary of these opinions.
Grossberg demonstrates that, over and above information processing, our brains sometimes go into a context-sensitive resonant state that can involve multiple brain regions. He explores experimental evidence that “all conscious states are resonant states”, but not vice versa, showing that, since not all brain dynamics are “resonant”, consciousness is not just a “whir of information-processing”:
When does a resonant state embody a conscious experience? “Why is it conscious”? And how do different resonant states support different kinds of conscious qualia? The other side of the coin is equally important: When does a resonant state fail to embody a conscious experience? Advanced brains have evolved in response to various evolutionary challenges in order to adapt to changing environments in real time. ART explains how consciousness enables such brains to better adapt to the world’s changing demands.
Grossberg is realistic about the limits on a scientific explanation of the hard problem:
It is important to ask: How far can any scientific theory go towards solving the Hard Problem? Let us suppose that a theory exists whose neural mechanisms interact to generate dynamical states with properties that mimic the parametric properties of the individual qualia that we consciously experience, notably the spatio-temporal patterning and dynamics of the resonant neural representations that represent these qualia. Suppose that these resonant dynamical states, in addition to mirroring properties of subjective reports of these qualia, predict properties of these experiences that are confirmed by psychological and noninvasive neurobiological experiments on humans, and are consistent with psychological, multiple-electrode neurophysiological data, and other types of neurobiological data that are collected from monkeys who experience the same stimulus conditions.
He then develops a strategy to move beyond the notion of the neural correlate of consciousness (Crick & Koch 1990), claiming these states are actually the physical manifestation of the conscious state:
Given such detailed correspondences with experienced qualia and multiple types of data, it can be argued that these dynamical resonant states are not just “neural correlates of consciousness” that various authors have also discussed, notably David Chalmers and Christof Koch and their colleagues. Rather, they are mechanistic representations of the qualia that embody individual conscious experiences on the psychological level. If such a correspondence between detailed brain representations and detailed properties of conscious qualia occurs for a sufficiently large body of psychological data, then it would provide strong evidence that these brain representations create and support these conscious experiences. A theory of this kind would have provided a linking hypothesis between brain dynamics and the conscious mind. Such a linking hypothesis between brain and mind must be demonstrated before one can claim to have a “theory of consciousness”.
However he then delineates the claim that this is the most complete scientific account of subjective experience possible, while conceding that it may point to a cosmological problem akin to those in relativity and quantum theory:
If, despite such a linking hypothesis, a philosopher or scientist claims that, unless one can “see red” or “feel fear” in a theory of the Hard Problem, then it does not contribute to solving that problem, then no scientific theory can ever hope to solve the Hard Problem. This is true because science as we know it cannot do more than to provide a mechanistic theoretical description of the dynamical events that occur when individual conscious qualia are experienced. However, as such a principled, albeit incrementally developing, theory of consciousness becomes available, including increasingly detailed psychological, neurobiological, and even biochemical processes in its explanations, it can dramatically shift the focus of discussions about consciousness, just as relativity theory transformed discussions of space and time, and quantum theory of how matter works. As in quantum theory, there are measurement limitations in understanding our brains.
Although he conceives of brain dynamics as being poised just above the level of quantum effects in vision and hearing, Grossberg sees brains as a new frontier of scientific discovery subject to the same principles of complementarity and uncertainty as arise in quantum physics:
Complementarity and uncertainty principles also arise in physics, notably in quantum mechanics. Since brains form part of the physical world, and interact ceaselessly with it to adapt to environmental challenges, it is perhaps not surprising that brains also obey principles of complementarity and uncertainty. Indeed, each brain is a measurement device for recording and analyzing events in the physical world. In fact, the human brain can detect even small numbers of the photons that give rise to percepts of light, and is tuned just above the noise level of phonons that give rise to percepts of sound.
The Uncertainty Principle identified complementary variables, such as the position and momentum of a particle, that could not both be measured with perfect precision. In all of these theories, however, the measurer who was initiating and recording measurements remained outside the measurement process. When we try to understand the brain, this is no longer possible. The brain is the measurement device, and the process of understanding mind and brain is the study of how brains measure the world. The measurement process is hereby brought into physical theory to an unprecedented degree.
Fig 83: Brain centres involved in intentional behaviour and subjectively conscious physical volition: (a) The cortex overlaying the basal ganglia, thalamus, amygdala and substantia nigra involved in planned action, motivation and volition. (b) The interactive circuits in the cortex, striatum and thalamus facilitating intentional motor behaviour. (c) The Motivator model clarifies how the basal ganglia and amygdala coordinate their complementary functions in the learning and performance of motivated acts. Brain areas can be divided into four regions that process information about conditioned stimuli (CSs) and unconditioned stimuli (USs): (a) Object Categories represent visual or gustatory inputs, in anterior inferotemporal (ITA) and rhinal (RHIN) cortices; (b) Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH); (c) Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex; and (d) the Reward Expectation Filter in the basal ganglia detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomes of the striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the SNc/VTA (substantia nigra pars compacta/ventral tegmental area). The network model connecting brain regions is consistent with both quantum and classical approaches and in no way eliminates subjective conscious volition from having an autonomous role. All it implies is that conscious volition arises from an evolved basis in these circuit relationships in mammals.
Grossberg sees the brain as presenting new issues for science: brains are measurement devices that confound the separation between the measured effect and the observer making a quantum measurement:
Since brains are also universal measurement devices, how do they differ from these more classical physical ideas? I believe that it is the brain’s ability to rapidly self-organize, through development and life-long learning, that sets it apart from previous physical theories. The brain thus represents a new frontier in measurement theory for the physical sciences, no less than the biological sciences. It remains to be seen how physical theories will develop to increasingly incorporate concepts about the self-organization of matter, and how these theories will be related to the special case of brain self-organization.
Experimental and theoretical evidence will be summarized in several chapters in support of the hypothesis that principles of complementarity and uncertainty that are realized within processing streams, better explain the brain’s functional organization than concepts about independent modules. Given this conclusion, we need to ask: If the brain and the physical world are both organized according to such principles, then in what way is the brain different from the types of physical theories that are already well-known? Why haven’t good theoretical physicists already “solved” the brain using known physical theories?
The brain’s universal measurement process can be expected to have a comparable impact on future science, once its implications are more broadly understood. Brain dynamics operate, however, above the quantum level, although they do so with remarkable efficiency, responding to just a few photons of light in the dark, and to faint sounds whose amplitude is just above the level of thermal noise in otherwise quiet spaces. Knowing more about how this exquisite tuning arose during evolution could provide important new information about the design of perceptual systems, no less than about how quantum processes interface with processes whose main interactions seem to be macroscopic.
In discussing the hierarchical feedback of the cortex and basal ganglia and the limbic system, Grossberg (2015) fluently cites both consciousness and volition as adaptive features of the brain as a self-organising system:
The basal ganglia control the gating of all phasic movements, including both eye movements and arm movements. Arm movements, unlike eye movements, can be made at variable speeds that are under volitional basal ganglia control. Arm movements realize the Three S’s of Movement Control; namely, Synergy, Synchrony, and Speed. … Many other brain processes can also be gated by the basal ganglia, whether automatically or through conscious volition. Several of these gating processes seem to regulate whether a top-down process subliminally primes or fully activates its target cells. As noted in Section 5.1, the ART Matching Rule enables the brain to dynamically stabilize learned memories using top-down attentional matching.
Such a volitionally-mediated shift enables top-down expectations, even in the absence of supportive bottom-up inputs, to cause conscious experiences of imagery and inner speech, and thereby to enable visual imagery, thinking, and planning activities to occur. Thus, the ability of volitional signals to convert the modulatory top-down priming signals into suprathreshold activations provides a great evolutionary advantage to those who possess it.
Such neurosystem models provide key insights into how processes associated with intentional acts and the reinforcement of sensory experiences through complementary adaptive networks model the neural correlate of conscious volitional acts and their smooth motor execution in the world at large. As they stand, these are still classical objective models that do not actually invoke conscious volition as experienced, but they do provide deep insight into the brain’s adaptive processes accompanying subjective conscious volition.
My critique, which is clear and simple, is that these designs remove such a high proportion of the key physical principles involved in biological brain function that they can have no hope of modelling subjective consciousness or volition, despite the liberal use of these terms in the network designs, such as the basal ganglia as gateways. Any pure abstract neural net model, however much it adapts to “resonate” with biological systems, is missing major fundamental formative physical principles of how brains actually work.
These include:
(A) The fact that biological neural networks are both biochemical and electrochemical in two ways (1) all electrochemical linkages, apart from gap junctions, work through the mediation of biochemical neurotransmitters and (2) the internal dynamics of individual neurons and glia are biochemical, not electrochemical.
(B) The fact that the electrochemical signals are dynamic and involve sophisticated properties including both (1) unstable dynamics at the edge of chaos and (2) phase coherence tuning between continuous potential gradients and action potentials.
(C) They involve both neurons and neuroglia working in complementary relationship.
(D) They involve developmental processes of cell migration determining the global architecture of the brain including both differentiation by the influence of neurotransmitter type and chaotic excitation in early development.
(E) The fact that the evolution of biological brains as neural networks is built on the excitatory neurotransmitter-driven social signalling and quantum sentience of single-celled eucaryotes, forming an intimately coupled society of amoebo-flagellate cells communicating by the same neurotransmitters, so that these underlying dynamics are fundamental and essential to biological neural net functionality.
Everything from simple molecules such as ATP acting as the energy currency of the cell, through protein folding, to enzymes involve quantum effects, such as tunnelling at active sites, and ion channels are at the same level.
It is only a step from there to recognising that such biological processes are actually fractal non-IID (not independently and identically distributed) quantum processes that do not converge to the classical limit, in the light of Gallego & Dakić (2021), because their defining contexts are continually evolving. This provides a causally open view of brain dynamics, in which the extra degree of freedom provided by consciousness, complementing objective physical computation, arises partly through quantum uncertainty itself, as conscious volition becomes subjectively manifest, ensuring survival under uncertain environmental threats.
However, this is not just a rational or mechanistically causal process. We evolved from generation upon generation of organisms surviving existential threats in the wild, which were predominantly solved by lightning fast hunch and intuition, and never by rational thought alone, except recently and all too briefly in our cultural epoch.
The great existential crises have always been about surviving environmental threats which are not only computationally intractable due to exponentiating degrees of freedom, but computationally insoluble because they involve the interaction of live volitional agents, each consciously violating the rules of the game.
Conscious volition evolved to enable subjective living agents to make hunch-like predictions of their own survival in contexts where no algorithmic or deterministic process, including the nascent parallelism of the cortex, limbic system and basal ganglia that Steve Grossberg has drawn attention to, could suffice, other than to define boundary conditions on conscious choices of volitional action. Conscious intentional will, given these constraints, remained the critical factor, complementing computational predictivity generated through non-linear dynamics, best predicting survival of a living organism in the quantum universe, which is why we still possess it.
When we come to the enigma of subjective conscious anticipation and volition under survival threats, these are clearly, at the physiological level, the most ancient and most strongly conserved. Although the brains of vertebrates, arthropods and cephalopods show vast network differences, the underlying processes generating consciousness remain strongly conserved to the extent that baby spiders display clear REM features during sleep despite having no obvious neural net correspondence. While graded membrane excitation is universal to all eucaryotes and shared by human phagocytes and amoeba, including the genes for the poisons used to kill bacteria, the action potential appears to have evolved only in flagellate eucaryotes, as part of the flagellar escape response to existential threat, later exemplified by the group flagellation of our choano-flagellate ancestor colonies.
All brains are thus intimate societies of dynamically-coupled excitable cells (neurons and glia) communicating through these same molecular social signalling pathways that social single celled eucaryotes use. Both strategic intelligence and conscious volition as edge-of-chaos membrane excitation in global feedback thus arose long before brains and network designs emerged.
Just as circuit design models can have predictive value, so does subjective conscious volition of the excitable eucaryote cell have clear survival value in evolution and hence predictive power of survival under existential threat, both in terms of arbitrary sensitivity to external stimuli at the quantum level and neurotransmitter generated social decision-making of the collective organism. Thus the basis of what we conceive of as subjective conscious volition is much more ancient and longer and more strongly conserved than any individual network model of the vertebrate brain and underlies all attempts to form realistic network models.
Since our cultural emergence, Homo sapiens has been locked in a state of competitive survival against its own individuals via Machiavellian intelligence, but broadly speaking, rationality – dependence on rational thought processes as a basis for adaptation – just brings us closer to the machine learning of robots rather than to conscious volition. Steve’s account of the mechanical aspects of the basal ganglia in Grossberg (2015) gives a good representation of how living neurosystems adaptively evolve to make the mechanical aspect of the neural correlate of conscious volition possible, but it says little about how we actually survive the tiger’s pounce, let alone the ultimate subtleties of human political intrigue, when the computational factors are ambiguous. Likewise decision theory or prospect theory, as noted in Wikipedia, offers only a relatively obvious asymmetric sigmoidal value function describing how risk aversion helps us survive, essentially because being eaten rates more decisively in the cost stakes than any single square meal does as a benefit.
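The asymmetry described above can be made concrete with the standard Kahneman–Tversky prospect-theory value function. The sketch below is illustrative only; the parameter values are the commonly cited 1992 median estimates, not anything fitted here.

```python
# A minimal sketch of the Kahneman-Tversky prospect-theory value function,
# illustrating the asymmetry described in the text: losses loom larger than
# gains. Parameters alpha, beta, lam are the oft-quoted 1992 median estimates,
# used purely for illustration.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain/loss x relative to a reference point of 0."""
    if x >= 0:
        return x ** alpha           # concave for gains (risk aversion)
    return -lam * (-x) ** beta      # steeper for losses (loss aversion)

# A loss of 100 weighs ~2.25 times more than an equal gain of 100:
gain, loss = value(100), value(-100)
print(gain, loss)
```

The asymmetric sigmoid arises because the curve is concave over gains and convex (and steeper, by the factor lam) over losses, which is exactly the "being eaten costs more than a meal benefits" asymmetry.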
Because proving the physical causal closure of the universe in the context of brain dynamics is impossible to achieve in practice in the quantum universe, physical materialism is itself not a scientific concept, so all attempts to model and understand conscious volition remain open and will continue to do so. The hard problem of consciousness is not a division between science and philosophy, as Steve suggests in his (2021) book, but our very oracle of cosmological existence.
The Readiness Potential and its Critics
Challenging the decision-making role of consciousness, Libet (1983, 1989) asked volunteers to flex a finger or wrist. When they did, the movements were preceded by a dip in the brain signals being recorded, called the "readiness potential" (RP). He interpreted this RP, which occurred a few tenths of a second before the volunteers said they had decided to move, as the brain preparing for movement. Libet concluded that unconscious neural processes determine our actions before we are ever aware of making a decision. Since then, others have quoted the experiment as evidence that free will is an illusion.
Articulating a theory heavily dependent on this highly flawed notion, Budson et al. (2022) claim that all the brain’s decision-making procedures are unconscious, followed half a second later by a conscious experience that is just a memory-based constructive representation of future outcomes. According to the researchers, this theory is important because it explains that all our decisions and actions are actually made unconsciously, although we fool ourselves into believing that we consciously made them:
In a nutshell, our theory is that consciousness developed as a memory system that is used by our unconscious brain to help us flexibly and creatively imagine the future and plan accordingly. What is completely new about this theory is that it suggests we don’t perceive the world, make decisions, or perform actions directly. Instead, we do all these things unconsciously and then—about half a second later—consciously remember doing them. We knew that conscious processes were simply too slow to be actively involved in music, sports, and other activities where split-second reflexes are required. But if consciousness is not involved in such processes, then a better explanation of what consciousness does was needed.
But this notion is itself a delusion. The conscious brain has evolved to be able to co-opt very fast subconscious processes to orchestrate, in real time, highly accurate, innovative conscious responses, which the agent is fully aware of exercising in real time. The evidence is that conscious control of subconscious fast processing, e.g. via insular von Economo neurons and the basal ganglia, occurs in parallel in real time. Tennis could not be played if the players' conscious reactions were half a second behind the ball; they could not represent, or accurately respond to, the actual dynamics.
Libet’s claim has been undermined by more recent studies. Instead of letting volunteers decide when to move, Trevena and Miller (2010) asked them to wait for an audio tone before deciding whether to tap a key. If Libet's interpretation were correct, the RP should be greater after the tone when a person chose to tap the key. While there was an RP before volunteers made their decision to move, the signal was the same whether or not they elected to tap. Miller concludes that the RP may merely be a sign that the brain is paying attention and does not indicate that a decision has been made. They also failed to find evidence of subconscious decision-making in a second experiment. This time they asked volunteers to press a key after the tone, but to decide on the spot whether to use their left or right hand. As movement in the right limb is related to the brain signals in the left hemisphere and vice versa, they reasoned that if an unconscious process is driving this decision, where it occurs in the brain should depend on which hand is chosen, but they found no such correlation.
Schurger and colleagues (2012) have a key explanation. Previous studies have shown that, when we have to make a decision based on sensory input, assemblies of neurons start accumulating evidence in favour of the various possible outcomes. The team reasoned that a decision is triggered when the evidence favouring one particular outcome becomes strong enough to tip the dynamics – i.e. when the neural noise generated by random or chaotic activity accumulates sufficiently for its associated assembly of neurons to cross a threshold tipping point. The team repeated Libet's experiment, but this time, if the volunteers heard a click while waiting to act spontaneously, they had to act immediately. The researchers predicted that the fastest responses to the click would be seen in those in whom the accumulation of neural noise had neared the threshold – something that would show up in their EEG as a readiness potential. In those with slower responses to the click, the readiness potential was indeed absent from the EEG recordings. “We argue that what looks like a pre-conscious decision process may not in fact reflect a decision at all. It only looks that way because of the nature of spontaneous brain activity.” Schurger and Uithol (2015) specifically cite evidence of a sensitively dependent butterfly effect (London et al. 2010), by which nervous systems vary their responses to identical stimuli, as an explanation of why it could be impossible to trace a deterministic decision-making path from contributory systems to a conscious decision, supporting their stochastic accumulator model. Hans Liljenström (2021), using stochastic modelling, concludes that if decisions have to be made fast, emotional processes and aspects dominate, while rational processes are more time-consuming and may result in a delayed decision.
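The logic of the stochastic accumulator account can be sketched numerically: in a leaky integrator driven only by noise around a sub-threshold drift, averaging trials time-locked to the moment the trace happens to cross threshold produces an RP-like ramp, even though nothing decision-like precedes the crossing. This is a back-of-envelope illustration of the idea, not Schurger et al.'s actual model or parameters.

```python
# Toy leaky stochastic accumulator in the spirit of Schurger et al. (2012).
# The "decision" is simply the moment noise pushes the trace over threshold;
# back-averaging trials aligned to that crossing yields a slow ramp that
# looks like a readiness potential. All parameter values are illustrative.

import random

def run_trial(drift=0.1, leak=0.5, noise=0.1, thresh=0.25, dt=0.01, t_max=60.0):
    """Simulate one trial; return the trajectory up to the threshold crossing."""
    x, traj, t = 0.0, [], 0.0
    while t < t_max:
        x += (drift - leak * x) * dt + noise * (dt ** 0.5) * random.gauss(0, 1)
        traj.append(x)
        if x >= thresh:          # noise-driven "decision to move"
            return traj
        t += dt
    return None                  # no crossing on this trial

random.seed(1)
window = 100                     # samples before the crossing to average over
aligned = []
while len(aligned) < 200:
    traj = run_trial()
    if traj is not None and len(traj) > window:
        aligned.append(traj[-window:])

# Averaging time-locked to the crossing: a ramp appears "before the decision",
# although it merely reflects selecting moments when noise happened to peak.
avg = [sum(tr[i] for tr in aligned) / len(aligned) for i in range(window)]
print(f"mean {window} steps before crossing: {avg[0]:.3f}, at crossing: {avg[-1]:.3f}")
```

The deterministic asymptote (drift/leak = 0.2) sits below threshold, so every crossing is noise-driven, which is the crux of the argument: the pre-movement buildup is a selection artifact of aligning on threshold crossings.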
Alexander et al. (2016) establish the lack of linkage of the RP to motor activity:
“The results reveal that robust RPs occurred in the absence of movement and that motor-related processes did not significantly modulate the RP. This suggests that the RP measured here is unlikely to reflect preconscious motor planning or preparation of an ensuing movement, and instead may reflect decision-related or anticipatory processes that are non-motoric in nature.”
More recently the actual basis coordinating a decision to act has been found to reside in slowly evolving dopamine modulation. When you reach out to perform an action, seconds before you voluntarily extend your arm, thousands of neurons in the motor regions of your brain erupt in a pattern of electrical activity that travels to the spinal cord and then to the muscles that power the reach. But just prior to this massively synchronised activity, the motor regions in your brain are relatively quiet. For such self-driven movements, a key piece of the “go” signal that tells the neurons precisely when to act has been revealed in the form of slow ramping up of dopamine in a region deep below the cortex which closely predicted the moment that mice would begin a movement — seconds into the future (Hamilos et al. 2021).
The authors imaged mesostriatal dopamine signals as mice decided when, after a cue, to retrieve water from a spout. Ramps in dopamine activity predicted the timing of licks: fast ramps preceded early retrievals, slow ones late retrievals. Surprisingly, dopaminergic signals ramped up over seconds between the start-timing cue and the self-timed movement, with variable dynamics that predicted the movement/reward time on single trials. Steeply rising signals preceded early lick-initiation, whereas slowly rising signals preceded later initiation. Higher baseline signals also predicted earlier self-timed movements. Consistent with this view, the dynamics of the slowly evolving endogenous dopaminergic signals quantitatively predicted the moment-by-moment probability of movement initiation on single trials. The authors propose that ramping dopaminergic signals, likely encoding dynamic reward expectation, can modulate the decision of when to move.
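The qualitative pattern reported above follows directly from a ramp-to-threshold reading of the result. The toy computation below assumes a linear ramp and an arbitrary threshold, purely to show why steeper ramps and higher baselines both predict earlier self-timed movement; it is not a model fitted to the Hamilos et al. data.

```python
# Toy ramp-to-threshold reading of the Hamilos et al. (2021) findings: if a
# dopaminergic signal ramps linearly from baseline b with slope s, movement is
# initiated when it reaches a threshold theta, at t = (theta - b) / s.
# Threshold and parameter values are illustrative, not fitted to data.

def initiation_time(baseline, slope, theta=1.0):
    """Time at which the linear ramp baseline + slope*t reaches theta."""
    return (theta - baseline) / slope

fast = initiation_time(baseline=0.2, slope=0.4)   # steep ramp -> early lick
slow = initiation_time(baseline=0.2, slope=0.1)   # shallow ramp -> late lick
high = initiation_time(baseline=0.5, slope=0.1)   # higher baseline -> earlier
print(fast, slow, high)   # 2.0 8.0 5.0
```

Adding trial-to-trial variability to the slope or baseline turns the crossing time into a probabilistic quantity, matching the observation that the ramps predict movement timing only on a moment-by-moment probability basis.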
Slowly varying neuromodulatory signals could allow the brain to adapt to its environment. Such flexibility wouldn’t be afforded by a signal that always led to movement at the exact same time. Allison Hamilos notes: “The animal is always uncertain, to some extent, about what the true state of the world is. You don’t want to do things the same way every single time — that could be potentially disadvantageous.”
This introduces further complexity into the entire pursuit of Libet's readiness potential, which is clearly not itself the defining event; that event is at first call concealed in a slowly varying dopamine modulation, which does not itself determine the timing of the action except on a probabilistic basis. Furthermore, the striatum is a gatekeeper in the basal ganglia, coordinating the underlying conscious decision to act; it is not the conscious decision itself.
Catherine Reason (2016), drawing on Caplain (1996, 2000) and Luna (2016), presents an intriguing logical proof that computing machines, and by extension physical systems, can never be certain whether they possess conscious awareness, undermining the principle of computational equivalence (Wolfram 2002, 2021):
An omega function is any phi-type function which can be performed, to within a quantified level of accuracy, by some conscious system. A phi-type function is any mapping which associates the state of some system with the truth value of some proposition. The significance of this is that it can be shown that no purely physical system can perform any phi-type function to within any quantified level of accuracy, if that physical system is required to be capable of humanlike reasoning.
The proof is as follows: Let us define a physical process as some process whose existence is not dependent on some observation of that process. Now let X be the set of all physical processes necessary to perform any phi-type function. Since the existence of X is not dependent on any given observation of X, it is impossible to be sure empirically of the existence of X. If it is impossible to be sure of the existence of X, then it is impossible to be sure of the accuracy of X. If it is impossible to be sure of the accuracy of X, then it is impossible to be sure that X correctly performs the phi-type function it is supposed to perform. Since any system capable of humanlike reasoning can deduce this, it follows that no physical system capable of humanlike reasoning can perform any phi-type function without becoming inconsistent.
Counterintuitively, this implies that human consciousness is associated with a violation of energy conservation. It also provides another objection to Libet:
“even if the readiness potential can be regarded as a predictor of the subject’s decision in a classical system, it cannot necessarily be regarded as such in a quantum system. The reason is that the neurological properties underlying the readiness potential may not actually have determinate values until the subject becomes consciously aware of their decision”.
In subsequent papers (Reason 2019, Reason & Shah 2021) she expands this argument:
I identify a specific operation which is a necessary property of all healthy human conscious individuals — specifically the operation of self-certainty, or the capacity of healthy conscious humans to “know” with certainty that they are conscious. This operation is shown to be inconsistent with the properties possible in any meaningful definition of a physical system.
In an earlier paper, using a no-go theorem, it was shown that conscious states cannot be comprised of processes that are physical in nature (Reason, 2019). Combining this result with another unrelated work on causal emergence in physical systems (Hoel, Albantakis and Tononi, 2013), we show in this paper that conscious macrostates are not emergent from physical systems and they also do not supervene on physical microstates.
In a counterpoint to this, Travers et al. (2020) suggest the RP is associated with learning and thus reflects motor planning or temporal expectation, though neither planning nor expectation informs us of the timing of a decision to act:
“Participants learned through trial and error when to make a simple action. As participants grew more certain about when to act, and became less variable and stochastic in the timing of their actions, the readiness potential prior to their actions became larger in amplitude. This is consistent with the proposal that the RP reflects motor planning or temporal expectation. … If the RP reflects freedom from external constraint, its amplitude should be greater early in learning, when participants do not yet know the best time to act. Conversely, if the RP reflects planning, it should be greater later on, when participants have learned, and know in advance, the time of action. We found that RP amplitudes grew with learning, suggesting that this neural activity reflects planning and anticipation for the forthcoming action, rather than freedom from external constraint.”
Fifel (2018) reviewing the state of the current research described the following picture:
Results from Emmons et al. (2017) suggest that such ramping activity encodes self-monitored time intervals. This hypothesis is particularly pertinent given that self-monitoring of the passing of time by the experimental subjects is intrinsic to the Libet et al. (1983) experiment. Alternatively, although not mutually exclusive, RP might reflect general anticipation (i.e., the conscious experience that an event will soon occur) (Alexander et al., 2016) or simply background neuronal noise (Schurger et al., 2016). Future studies are needed to test these alternatives. … Consequently, we might conclude that: Neuroscience may in no way interfere with our first-person experience of the will, it can in the end only describe it ... it leaves everything as it is.
The difficulty of the hard problem, which remains unresolved 26 years later, is also tied to the likewise unresolved problem of the assumed causal closure of the universe in the context of the brain, which underlies purely materialistic neuroscience. Until it is empirically confirmed, it remains simply a matter of opinion that has grown into a belief system academically prejudiced against hypotheses not compliant with the physical materialist Weltanschauung.
While some neuroscientists (Johnson 2020) imply the hard problem is not even a scientific question, the neuroscience concept of causal closure (Chalmers 2015) based on classical causality, or quantum correspondence to it, not only remains empirically unverified in the light of Libet, Schurger and others, but it is unclear that a convincing empirical demonstration is even possible, or could be, given the fact that neuronal feedback processes span all scales from the organism to the quantum uncertain realm and the self-organised criticality of brain dynamics. Finally, it is in manifest conflict with all empirical experience of subjective conscious volitional intent universal to sane human beings.
As Bernard Baars commented in conversation:
I don't think science needs to, or CAN prove causal closure, because what kind of evidence will prove that? We don't know if physics is "causally closed," and at various times distinguished physicists think they know the answer, but then it breaks down. The Bertrand Russell story broke down, and the Hilbert program in math, and ODEs, and the record is not hopeful on final solutions showing a metaphysically closed system.
The status of the neuroscience perspective of causal closure has led to an ongoing debate about the efficacy of human volition and the status of free will (Nahmias 2008, Mele 2014). However, Joshua Shepherd (2017) points out that the neuroscientific threat to free will has not been causally established, particularly in the light of Schurger et al. (2015).
For this reason, in treating the hard problem and volitional intent, I will place the onus of proof on materialism to demonstrate itself, and in defence of volition I have simply outlined notable features of central nervous processing consistent with an in-principle capacity to operate in a quantum-open state of seamless partial causal closure, involving the subjectively conscious efficacy of volitional will in decision-making (in the brain) and behaviour (in the world). From this point of view, the efficacy of volition is itself a validated empirical experience, near universal to sane conscious humans, thus negating causal closure by veridical affirmation in the framework of symbiotic existential cosmology, where empirical experience has equally valid cosmological status to empirical observation.
Libet’s experiment purported to demonstrate an inconsistency, by implying the brain had already made a decision before the conscious experience of it, but Trevena and Miller and Schurger’s team have deprecated this imputation.
Hopeful Monster 1: Virtual Machines v Cartesian Theatres
Reductionistic descriptions attempting to explain subjective experience objectively frequently display similar pitfalls to creationist descriptions of nature, and those in Biblical Genesis, which project easy, familiar concepts, such as human manufacture, breath, or verbal command, onto the natural universe. In his reductionist account in “Consciousness Explained”, Daniel Dennett (1991) cites his “multiple drafts” model of brain processing as a case of evolutionary competition among competing neural assemblies, lacking overall coherence, thus bypassing the need for subjective consciousness. This exposes a serious problem of conceptual inadequacy with reductionism. Daniel is here writing his book using the same metaphors as the very activities he happens to be using – the message is thus the medium. He can do this as a subjectively conscious being only by suppressing the significance of virtually every form of coherent conscious experience around him, subjugating virtually all features of his conscious existence, operating for 100% of his conscious life, in favour of a sequence of verbal constructs having little more explanatory value than a tautology. This is what I call the psychosis of reductionistic materialism, which is shared by many AI researchers and cognitive scientists.
Despite describing the mind as a virtual machine, Dennett & Kinsbourne (1995) do concede that a conscious mind exists, at least as an observer:
“Wherever there is a conscious mind, there is a point of view. A conscious mind is an observer, who takes in the information that is available at a particular (roughly) continuous sequence of times and places in the universe. ... It is now quite clear that there is no single point in the brain where all information funnels in, and this fact has some far from obvious consequences.”
But neuroscience has long ceased talking about a single point or single brain locus responsible for consciousness, which is instead associated with coherent “in phase” activity of the brain as a whole. Nevertheless, Dennett attempts to mount a lethal attack on any coherent manifestation of subjectivity, asserting there is no single, constitutive "stream of consciousness”:
“The alternative, Multiple Drafts model holds that whereas the brain events that discriminate various perceptual contents are distributed in both space and time in the brain, and whereas the temporal properties of these various events are determinate, none of these temporal properties determine subjective order, since there is no single, constitutive "stream of consciousness" but rather a parallel stream of conflicting and continuously revised contents” (Dennett & Kinsbourne 1995).
“There is no single, definitive "stream of consciousness," because there is no central Headquarters, no Cartesian Theatre where "it all comes together" for the perusal of a Central Meaner. Instead of such a single stream (however wide), there are multiple channels in which specialist circuits try, in parallel pandemoniums, to do their various things, creating Multiple Drafts as they go. Most of these fragmentary drafts of "narrative" play short-lived roles in the modulation of current activity but some get promoted to further functional roles, in swift succession, by the activity of a virtual machine in the brain. The seriality of this machine (its "von Neumannesque" character) is not a "hard-wired" design feature, but rather the upshot of a succession of coalitions of these specialists.” (Dennett 1991)
However, as we know and shall discuss in the context of the default mode network and psychedelics, there is a balance between top-down processes of control and integration and just such a flood of competing regional bottom-up excitations, which become more able to enter consciousness under the drug because of lowered barriers.
Yet the ghost Dennett claims to have crushed just keeps coming back to haunt him:
“Cartesian materialism is the view that there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of "presentation" in experience because what happens there is what you are conscious of. ... Many theorists would insist that they have explicitly rejected such an obviously bad idea. But ... the persuasive imagery of the Cartesian Theater keeps coming back to haunt us—laypeople and scientists alike—even after its ghostly dualism has been denounced and exorcized.”
Fig 84: Baars’ (1997) view of the Cartesian theatre of consciousness has genuine explanatory power about the easy problem of the relation between peripheral unconscious processing and integrated coherent states associated with consciousness.
Bernard Baars’ (1997) global workspace theory, in the form of the actors in the Cartesian theatre of consciousness, is creatively provocative of the psyche, and concedes a central role for consciousness. His approach suggests that consciousness is associated with the whole brain in integrated coherent activity, and is thus a property of the brain as a whole functioning entity, in relation to the global workspace, rather than arising from specific subsystems.
Furthermore, the approach rather neatly identifies the distinction between unconscious processing and conscious experience in the spotlight of attention, accepting conscious experience as a central arena, consistent with whether a given dynamic is confined to asynchronous regional activity or is part of a coherent global response. But again this description is an imaginative representation of Descartes’ homunculus in the guise of a Dionysian dramatic production, so it is also a projection onto subjective consciousness, albeit a more engaging one.
Lenore and Manuel Blum (2021) have developed a theoretical model of conscious awareness designed in relation to Baars' global workspace theory that applies as much to a computer as an organism:
Our view is that consciousness is a property of all properly organized computing systems, whether made of flesh and blood or metal and silicon. With this in mind, we give a simple abstract substrate-independent computational model of consciousness. We are not looking to model the brain nor to suggest neural correlates of consciousness, interesting as they are. We are looking to understand consciousness and its related phenomena.
Essentially the theory builds on the known feedbacks between peripheral unconscious processing and short term memory and the spotlight of conscious attention, paraphrasing these in purely computational terms, utilising a world model that is updated, notions corresponding to "feelings" and even "dream creation", in which a sleep processor alters the modality of informational chunking.
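The architecture just described – unconscious specialists competing for a broadcast spotlight – can be caricatured in a few lines of code. This is purely an illustrative sketch of the global-workspace idea common to Baars and the Blums, not the Blums' actual model; all names (`Processor`, `Workspace`, `salience`) are hypothetical.

```python
import random

class Processor:
    """A toy unconscious specialist that proposes a 'chunk' with a salience weight."""
    def __init__(self, name):
        self.name = name

    def propose(self):
        # Each specialist competes by submitting content with a salience value;
        # here salience is random purely for illustration.
        return {"source": self.name,
                "content": f"{self.name}-signal",
                "salience": random.random()}

class Workspace:
    """The spotlight: the winning chunk becomes the globally broadcast content."""
    def __init__(self, processors):
        self.processors = processors

    def cycle(self):
        proposals = [p.propose() for p in self.processors]
        winner = max(proposals, key=lambda c: c["salience"])  # competition
        # In a fuller model the winner would now be broadcast back to
        # every specialist and to short-term memory.
        return winner

procs = [Processor(n) for n in ("vision", "audition", "memory")]
ws = Workspace(procs)
conscious_content = ws.cycle()
print(conscious_content["source"])
```

The sketch shows only the easy-problem machinery – competition and broadcast – which is precisely what the following paragraph argues cannot capture subjective consciousness itself.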
While it is possible to conceive of such analogous models, it remains extremely unlikely that any such computational model can capture the true nature of subjective consciousness. By contrast with a Turing machine, which operates discretely and serially on a single mechanistic scale, biological neurosystems operate continuously and discretely on fractal scales from the quantum level, through molecular and subcellular dynamics, up to global brain states, so it remains implausible in the extreme that such computational systems, however complex in structural design, can replicate organismic subjective consciousness. The same considerations apply to artificial neural net designs, which lack the fractal edge-of-chaos dynamic of biological neurosystems.
Another discovery pertinent here (Fernandino et al. 2022) is that lexical semantic information can be reliably decoded from a wide range of heteromodal cortical areas in the frontal, parietal, and temporal cortex, but that in most of these areas there is a striking advantage for experience-based representational structures (i.e., encoding information about sensory-motor, affective, and other features of phenomenal experience), with little evidence for independent taxonomic or distributional organisation. This shows that experience is the foundational basis for conceptual and cognitive thought, giving it a primary universal status over rational or verbal thought.
Consciousness and Broad Integrated Processing: The Global Neuronal Workspace (GNW) model
Stanislas Dehaene and Jean-Pierre Changeux (2008, 2011) have combined experimental studies and theoretical models, including Baars' global workspace theory, to address the challenge of establishing a causal link between subjective conscious experience and measurable neuronal activity, in the form of the Global Neuronal Workspace (GNW) model, according to which conscious access occurs when incoming information is made globally available to multiple brain systems through a network of neurons with long-range axons densely distributed in prefrontal, parieto-temporal, and cingulate cortices.
Converging neuroimaging and neurophysiological data, acquired during minimal experimental contrasts between conscious and nonconscious processing, point to objective neural measures of conscious access: late amplification of relevant sensory activity, long-distance cortico-cortical synchronization at beta and gamma frequencies, and ‘ignition’ i.e. "lighting up" of a large-scale prefronto-parietal network. By contrast, as shown in fig 86, states of reduced consciousness have large areas of cortical metabolic deactivation.
Fig 85: Both fMRI (1) and EEG/MEG (2) data show broad activation across diverse linked cortical regions when non-conscious processing rises to the conscious level. Likewise local feed-forward propagation (3) leads to reverberating cortical connections. These influences are combined in the GNW model (4), in which Baars’ global workspace theatre becomes a more precisely defined, globally resonant network model attempting to solve several of the easier problems of consciousness.
In conclusion, the authors look ahead to the quest of understanding the conscious brain and what it entails:
The present review was deliberately limited to conscious access. Several authors argue, however, for additional, higher-order concepts of consciousness. For Damasio and Meyer (2009), core consciousness of incoming sensory information requires integrating it with a sense of self (the specific subjective point of view of the perceiving organism) to form a representation of how the organism is modified by the information; extended consciousness occurs when this representation is additionally related to the memorized past and anticipated future (see also Edelman, 1989). For Rosenthal (2004), a higher-order thought, coding for the very fact that the organism is currently representing a piece of information, is needed for that information to be conscious. Indeed, metacognition, or the ability to reflect upon thoughts and draw judgements upon them is often proposed as a crucial ingredient of consciousness. In humans, as opposed to other animals, consciousness may also involve the construction of a verbal narrative of the reasons for our behavior (Gazzaniga et al., 1977).
Fig 86: Top: Conscious brain states are commonly associated with phase correlated global cortical activity. Conscious brain activity in healthy controls is contrasted with diminished cortical connectivity of excitation in unaware and minimally conscious states (Demertzi et al. 2019). Bottom: Reduced metabolism during loss of consciousness (Dehaene & Changeux J 2011).
In the future, as argued by Haynes (2009), the mapping of conscious experiences onto neural states will ultimately require not only a neural distinction between seen and not-seen trials, but also a proof that the proposed conscious neural state actually encodes all the details of the participant’s current subjective experience. Criteria for a genuine one-to-one mapping should include verifying that the proposed neural state has the same perceptual stability (for instance over successive eye movements) and suffers from the same occasional illusions as the subject’s own report.
However, decoding the more intermingled neural patterns expected from PFC and other associative cortices is clearly a challenge for future research. Another important question concerns the genetic mechanisms that, in the course of biological evolution, have led to the development of the GNW architecture, particularly the relative expansion of PFC, higher associative cortices, and their underlying long-distance white matter tracts in the course of hominization. Finally, now that measures of conscious processing have been identified in human adults, it should become possible to ask how they transpose to lower animal species and to human infants and fetuses.
In "A better way to crack the brain”, Mainen, Häusser & Pouget (2016) cite novel emerging technologies such as optogenetics as tools likely to eclipse the overriding emphasis on electrical networking data, but at the same time illustrate the enormity of the challenge of neuroscience attempting to address consciousness as a whole.
Some sceptics point to the teething problems of existing brain initiatives as evidence that neuroscience lacks well-defined objectives, unlike high-energy physics, mathematics, astronomy or genetics.
In our view, brain science, especially systems neuroscience (which tries to link the activity of sets of neurons to behaviour) does not want for bold, concrete goals. Yet large-scale initiatives have tended to set objectives that are too vague and not realistic, even on a ten-year timescale.
Fig 87: Optogenetic images of pyramidal cells in a rodent cortex.
Several advances over the past decade have made it vastly more tractable to solve fundamental problems such as how we recognize objects or make decisions. Researchers can now monitor and manipulate patterns of activity in large neuronal ensembles, thanks to new technologies in molecular engineering, micro-electronics and computing. For example, a combination of advanced optical imaging and optogenetics can now read and write patterns of activity into populations of neurons. It is also possible to relate firing patterns to the biology of the neurons being recorded, including their genetics and connectivity.
However, none of these approaches comes even close to stitching together a functional view of brain processing capable of solving the hard problem, or even of establishing causal closure of the universe in the context of brain function, given the extreme difficulty of verifying classical causality in every brain process and the quantum nature of all brain processes at the molecular level. Future prospects for solving the hard problem via the easy ones thus remain unestablished.
Epiphenomenalism, Conscious Volition and Free Will
Thomas Kuhn (1922–1996) is perhaps the most influential philosopher of science of the twentieth century. His book “The Structure of Scientific Revolutions” (Kuhn 1962) is one of the most cited academic books of all time. A particularly important part of Kuhn’s thesis focuses upon the consensus on exemplary instances of scientific research. These exemplars of good science are what Kuhn refers to when he uses the term ‘paradigm’ in a narrower sense. He cites Aristotle’s analysis of motion, Ptolemy’s computations of planetary positions, Lavoisier’s application of the balance, and Maxwell’s mathematization of the electromagnetic field as paradigms (ibid, 23). According to Kuhn, the development of a science is not uniform but has alternating ‘normal’ and ‘revolutionary’ (or ‘extraordinary’) phases in which paradigm shifts occur.
Rejecting a teleological view of science progressing towards the truth, Kuhn favours an evolutionary view of scientific progress (1962/1970a, 170–3). The evolutionary development of an organism might be seen as its response to a challenge set by its environment. But that does not imply that there is some ideal form of the organism that it is evolving towards. Analogously, science improves by allowing its theories to evolve in response to puzzles, and progress is measured by its success in solving those puzzles; it is not measured by its progress towards an ideal true theory. While evolution does not lead towards ideal organisms, it does lead to greater diversity of kinds of organism. This is the basis of a Kuhnian account of specialisation in science, in which the revolutionary new theory that succeeds in replacing another that is subject to crisis may fail to satisfy all the needs of those working with the earlier theory. One response to this might be for the field to develop two theories, with domains restricted relative to the original theory (one might be the old theory or a version of it).
Free will is the notion that we can make real choices which are partially or completely independent of antecedent conditions – "the power of acting without the constraint of necessity or fate; the ability to act at one's own discretion", in the context of the given circumstances. Determinism denies this and maintains that causation is operative in all human affairs. Increasingly, despite the discovery of quantum uncertainty, scientists argue that their discoveries challenge the existence of free will. Studies indicate that informing people about such discoveries can change the degree to which they believe in free will and subtly alter their behaviour, leading to a social erosion of human agency, personal and ethical responsibility.
Philosophical analysis of free will divides into two opposing responses. Incompatibilists claim that free will and determinism cannot coexist. Among incompatibilists, metaphysical libertarians, who number among them Descartes, Bishop Berkeley and Kant, argue that humans have free will, and hence deny the truth of determinism. Libertarianism holds onto a concept of free will that requires the agent to be able to take more than one possible course of action under a given set of circumstances, some arguing that indeterminism helps secure free will, others arguing that free will requires a special causal power, agent-causation. Instead, compatibilists argue that free and responsible agency requires the capacities involved in self-reflection and practical deliberation; free will is the ability to make choices based on reasons, along with the opportunity to exercise this ability without undue constraints (Nadelhoffer et al. 2014). This can make rational acts or decisions compatible with determinism.
Our concern here is thus not with responsible agency, which may or may not be compatible with determinism, but with affirming the existence of agency not causally determined by physical processes in the brain. Epiphenomenalists accept that subjective consciousness exists, as an internal model of reality constructed by the brain to give a global description of the coherent brain processes involved in perception, attention and cognition, but deny the volitional will over our actions that is central to both reasoned and creative physical actions. This invokes a serious doubt that materialistic neuroscience can be in any way consistent with any form of consciously conceived ethics, because moral or ethical reasoning is reduced to forms of aversive conditioning, consistent with behaviourism and Pavlov’s dogs, subjectively rationalised by the subject as a reason. This places volition as a delusion, driven by evolutionary compensation to mask the futility of any subjective belief in organismic agency over the world.
Defending subjective volitional agency thus depends centrally on the innovative ability of the subjective conscious agent to generate actions which lie outside the constraints of determined antecedents, placing a key emphasis on creativity and idiosyncrasy, amid physical uncertainty, rather than on cognitive rationality, as reasons are themselves subject to antecedents.
Bob Doyle notes that in the first two-stage model of free will, William James (1884) proposed that indeterminism is the source of what James calls "alternative possibilities" and "ambiguous futures." The chance generation of such alternative possibilities for action does not in any way limit the choice to one of them. For James, chance is not the direct cause of actions. James makes it clear that it is his choice that “grants consent” to one of them. In 1884, James asked some Harvard Divinity School students to consider his choice of which way to walk home after his talk:
What is meant by saying that my choice of which way to walk home after the lecture is ambiguous and matter of chance?...It means that both Divinity Avenue and Oxford Street are called but only one, and that one either one, shall be chosen.
James was thus the first thinker to enunciate clearly a two-stage decision process, with chance in a present time of random alternatives, leading to a choice which grants consent to one possibility and transforms an equivocal ambiguous future into an unalterable and simple past. There is a temporal sequence of undetermined alternative possibilities followed by an adequately determined choice where chance is no longer a factor. James also asked the students to imagine his actions repeated in exactly the same circumstances, a condition which is regarded today as one of the great challenges to libertarian free will. James anticipates much of modern physical theories of multiple universes:
Imagine that I first walk through Divinity Avenue, and then imagine that the powers governing the universe annihilate ten minutes of time with all that it contained, and set me back at the door of this hall just as I was before the choice was made. Imagine then that, everything else being the same, I now make a different choice and traverse Oxford Street. You, as passive spectators, look on and see the two alternative universes,--one of them with me walking through Divinity Avenue in it, the other with the same me walking through Oxford Street. Now, if you are determinists you believe one of these universes to have been from eternity impossible: you believe it to have been impossible because of the intrinsic irrationality or accidentality somewhere involved in it. But looking outwardly at these universes, can you say which is the impossible and accidental one, and which the rational and necessary one? I doubt if the most ironclad determinist among you could have the slightest glimmer of light on this point.
Henri Poincaré speculated on how his mind worked when solving mathematical problems. He had the critical insight that random combinations and possibilities are generated, some unconsciously, and are then selected among, perhaps initially also by an unconscious process, but finally by a definite conscious process of validation:
It is certain that the combinations which present themselves to the mind in a kind of sudden illumination after a somewhat prolonged period of unconscious work are generally useful and fruitful combinations… all the combinations are formed as a result of the automatic action of the subliminal ego, but those only which are interesting find their way into the field of consciousness… A few only are harmonious, and consequently at once useful and beautiful, and they will be capable of affecting the geometrician's special sensibility I have been speaking of; which, once aroused, will direct our attention upon them, and will thus give them the opportunity of becoming conscious… In the subliminal ego, on the contrary, there reigns what I would call liberty, if one could give this name to the mere absence of discipline and to disorder born of chance.
In his own two-stage model, Arthur Compton championed the idea of human freedom based on quantum uncertainty and invented the notion of amplification of microscopic quantum events to bring chance into the macroscopic world. Years later, he clarified the two-stage nature of his idea in an Atlantic Monthly article in 1955:
A set of known physical conditions is not adequate to specify precisely what a forthcoming event will be. These conditions, insofar as they can be known, define instead a range of possible events from among which some particular event will occur. When one exercises freedom, by his act of choice he is himself adding a factor not supplied by the physical conditions and is thus himself determining what will occur. That he does so is known only to the person himself. From the outside one can see in his act only the working of physical law. It is the inner knowledge that he is in fact doing what he intends to do that tells the actor himself that he is free.
At first Karl Popper dismissed quantum mechanics as being no help with free will, but he later described a two-stage model paralleling Darwinian evolution, with genetic mutations being probabilistic and involving quantum uncertainty.
In 1977 he gave the first Darwin Lecture, "Natural Selection and the Emergence of Mind". In it he said he had changed his mind (a rare admission by a philosopher) about two things. First, he now thought that natural selection was not a "tautology" that made it an unfalsifiable theory. Second, he had come to accept the random variation and selection of ideas as a model of free will: “The selection of a kind of behavior out of a randomly offered repertoire may be an act of choice, even an act of free will. I am an indeterminist; and in discussing indeterminism I have often regretfully pointed out that quantum indeterminacy does not seem to help us; for the amplification of something like, say, radioactive disintegration processes would not lead to human action or even animal action, but only to random movements. I have changed my mind on this issue. A choice process may be a selection process, and the selection may be from some repertoire of random events, without being random in its turn. This seems to me to offer a promising solution to one of our most vexing problems, and one by downward causation.” This is now the leading two-stage model of free will.
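The two-stage structure that James, Compton and Popper converge on – chance offers a repertoire of alternatives, and a choice that is "not random in its turn" selects among them – can be made concrete in a short sketch. This is purely illustrative; the function names and the toy preference criterion are hypothetical, not drawn from any of these authors.

```python
import random

def generate_alternatives(options, k, rng):
    """Stage 1: chance offers a random repertoire of possible actions."""
    return rng.sample(options, k)

def choose(alternatives, preference):
    """Stage 2: an adequately determined choice, not random in its turn."""
    return max(alternatives, key=preference)

rng = random.Random()
routes = ["Divinity Avenue", "Oxford Street", "Kirkland Street", "Quincy Street"]

# The ambiguous future: chance presents two of the four routes.
offered = generate_alternatives(routes, 2, rng)

# The choice "grants consent" to one possibility by a deterministic
# criterion (here, arbitrarily, preferring the longer street name).
decision = choose(offered, preference=len)
print(decision)
```

However the repertoire falls out, the second stage is a deterministic function of it, which is exactly the sense in which the selection escapes being "only random movements".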
Fig 88: Diagram from Descartes' Treatise of Man (1664), showing the formation of inverted retinal images in the eyes, and the transmission of these images, via the nerves, so as to form a single, re-inverted image (an idea) on the surface of the pineal gland.
As a young man, Descartes had a mystical experience in a sauna on the Danube: three dreams, which he interpreted as a message telling him to come up with a theory of everything. On the strength of this he dedicated his life to philosophy, leading to his iconic quote – Cogito ergo sum, “I think therefore I am” – and to Cartesian dualism, immortalised in the homunculus. This means that, in a sense, the Cartesian heritage of dualism is a genuine visionary attempt on Descartes’ part to come to terms with his own conscious experience in terms of his cognition, in distinction from the world around him. Once the separation invoked by the term dualism is replaced by complementarity, we arrive at Darwinian panpsychism.
Experior, ergo sum, experimur, ergo sumus.
I experience therefore I am, we experience therefore we are!
The traditional view of subjective consciousness stemming from Thomas Huxley is that of epiphenomenalism – the view that mental events are caused by physical events in the brain, but have no effects upon any physical events.
The way paradigm shifts can occur can be no more starkly illustrated than in the way in which epiphenomenalism, behaviourism and pure materialism, including reductionism, came to dominate the scientific view of reality and the conscious mind.
Fig 89: A decapitated frog uses its right foot to try to remove burning acid, but when it is cut off it uses its left, although having no brain.
Huxley (1874) held this view, comparing mental events to a steam whistle that contributes nothing to the work of a locomotive. William James (1879) rejected this view, characterising epiphenomenalists’ mental events as not affecting the brain activity that produces them “any more than a shadow reacts upon the steps of the traveller whom it accompanies” – thus turning subjective consciousness from active agency into a mere passenger. Huxley’s essay likewise compares consciousness to the sound of the bell of a clock that has no role in keeping the time, and treats volition simply as a symbol in consciousness of the brain-state cause of an action. Non-efficacious mental events are referred to in this essay as “collateral products” of their physical causes.
Klein (2021), in continuing paragraphs, notes that the story begins with Eduard Pflüger’s 1853 experiments showing that some decapitated vertebrates exhibit behaviour it is tempting to call purposive. The results were controversial because purposive behaviour had long been regarded as a mark of consciousness. Those who continued to think it was such a mark had to count a pithed frog – and presumably, a chicken running around with its head cut off – as conscious. You can see such ideas echoing today in theories such as Solms and Friston's (2018) brain-stem based model of consciousness.
But this view opened the way for epiphenomenalism: just as pithed frogs seem to act with purpose even though their behaviour is not really guided by phenomenal consciousness, so intact human behaviours may seem purposive without really being guided by phenomenal consciousness.
Fig 90: Representation of consciousness from the seventeenth century by Robert Fludd, an English Paracelsian physician.
Descartes had famously contended that living animals might be like machines in the sense of being non-conscious organisms all of whose behaviours are produced strictly mechanistically. Those in the seventeenth and eighteenth century who adopted a broadly Cartesian approach to animal physiology are often called ‘mechanists’, and their approach is typically contrasted with so-called ‘animists’. What separated the two groups was the issue of whether and to what extent the mechanical principles of Newton and Boyle could account for the functioning of living organisms.
Even for those more inclined towards mechanism, though, animistic tendencies still underlay much physiological thinking throughout the early modern period. For instance, Giovanni Borelli (1608–1679) had developed a mechanistic account of how the heart pumps blood. But even Borelli gave the soul a small but important role in this motion. Borelli contended that ‘the unpleasant accumulation of blood in the heart of the preformed embryo would be perceived by the “sentient faculty” (facultas sensitiva) of the soul through the nerves, which would then prompt the ventricle to contract’. Only after the process was thus initiated would the circulation continue mechanistically, as a kind of physical, acquired habit. But the ultimate cause of this motion was the soul.
Now, suppose one accepts purposive behaviour as a mark of consciousness (or sensation, or volition, or all of these). Then one arrives at a surprising result indeed – that the brainless frog, properly prepared, remains a conscious agent. Of course, there is a lot riding on just what is meant by ‘consciousness’, ‘sensation’, and ‘volition’. Pflüger himself often wrote about the decapitated frog’s supposed ‘consciousness’ (Bewusstsein), but was rather loose and poetic in spelling out what that term was to mean. Still, his general thesis was clear enough: that in addition to the brain, the spinal cord is also an organ that independently produces consciousness. One controversial implication is that consciousness itself may be divisible (and so literally extended; see Huxley, 1870 5–6) – it may exist in various parts of the nervous system, even in a part of the spinal cord that has been divided from the brain (Fearing 1930 162–3).
Lotze’s thought was that these behaviours seem purposive only because they are complex. If we allow that the nervous system can acquire complex, reflexive actions through bodily learning, then we can maintain that these behaviours are mechanically determined, and not guided or accompanied by any phenomenal consciousness. The difficulty with this response is that pithed frogs find ways to solve physical challenges they cannot be supposed to have faced before being pithed. For instance, suppose one places a pithed frog on its back, holds one leg straight up, perpendicular to the body, and irritates the leg with acid. The pithed frog will then raise the other leg to the same, odd position so as to be able to wipe away the irritant (Huxley 1870 3). Huxley also reports that a frog that is pithed above the medulla oblongata (but below the cerebellum) loses the ability to jump, even though the frog with the brain stem and cerebellum both intact is able to perform this action, at least in response to irritation. A frog pithed just below the cerebrum ‘can see, swallow, jump, and swim’, though still will typically move only if prompted by an outer stimulus (Huxley 1870 3–4).
Now what does Lewes mean by ‘sensation’ and ‘volition’?
Do what we will, we cannot altogether divest Sensibility of its psychological connotations, cannot help interpreting it in terms of Consciousness; so that even when treating of sensitive phenomena observed in molluscs and insects, we always imagine these more or less suffused with Feeling, as this is known in our own conscious states. (Lewes 1877 188–9)
He saw that one must first settle an important issue before it is possible to interpret these experiments. He wrote, “we have no proof, rigorously speaking, that any animal feels; none that any human being feels; we conclude that men feel, from certain external manifestations, which resemble our own, under feeling; and we conclude that animals feel – on similar grounds.”
Now, inasmuch as the actions of animals furnish us with our sole evidence for the belief in their feeling, and this evidence is universally considered as scientifically valid, it is clear that similar actions in decapitated animals will be equally valid; and when I speak of proof, it is in this sense. Spontaneity and choice are two signs which we all accept as conclusive of sensation and volition. (Lewes 1859 237–8).
Does Pflüger’s experiment prove that there is sensation or volition in the pithed frog? We cannot tell, Lewes suggests, until we first settle on some third-person-accessible mark of sensation and volition. And the marks Lewes proposes are spontaneity and choice.
For Lewes, every physiological change is in some sense sensory, and every physiological change thereby influences the ‘stream of Consciousness’, however slightly.
Thomas Huxley (1874) offered the most influential and provocative version of the conscious automaton theory in an address in Belfast. According to this view, ‘consciousness’, synonymous with Lewes’ ‘sensation’, accompanies the body without acting on it, just as ‘the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery’. Conscious states are continually being caused by brain states from moment to moment, on this view, but are themselves causally inert. In other words, although Huxley accepted the existence of sensation, he rejected the existence of ‘volition’ (as Lewes had used that word). This is an early form of epiphenomenalism.
Pflüger and Lewes had indeed established the existence of purposive behaviour in pithed frogs, Huxley readily conceded (Huxley 1874 223). But since it is absurd (according to Huxley) to think the behaviour of brainless frogs is under conscious control, the correct lesson to draw from Pflüger and Lewes’ results was that purposive actions are not sufficient to establish volition. In fact, Huxley evidently was unwilling to accept the existence of any behavioural mark of either sensation or volition.
It must indeed be admitted, that, if any one think fit to maintain that the spinal cord below the injury is conscious, but that it is cut off from any means of making its consciousness known to the other consciousness in the brain, there is no means of driving him from his position by logic. But assuredly there is no way of proving it, and in the matter of consciousness, if in anything, we may hold by the rule, ‘De non apparentibus et de non existentibus eadem est ratio’ [‘the reasoning about things that do not appear and things that do not exist is the same’].
(Huxley, 1874, 220)
The mechanist’s dilemma is the following ‘paradox’:
A: If one accepts any behavioural mark of sensation and volition, then the experimental data will force us to attribute sensation and volition to both decapitated and intact vertebrates alike.
B: If one rejects the existence of a behavioural mark, then one has no grounds for ascribing sensation or volition to either decapitated or intact vertebrates.
Huxley’s pronouncement piggybacks on the position he took in the mechanist’s dilemma. His claim that spinal consciousness cannot be observed amounts to the claim that such a consciousness cannot be observed first-personally. But that is the crux of the mechanist’s dilemma.
Huxley nevertheless was reverential of the contribution made by Rene Descartes in understanding the physiology of the brain and body:
The first proposition culled from the works of Descartes which I have to lay before you, is one which will sound very familiar. It is the view, which he was the first, so far as I know, to state, not only definitely, but upon sufficient grounds, that the brain is the organ of sensation, of thought, and of emotion – using the word "organ" in this sense, that certain changes which take place in the matter of the brain are the essential antecedents of those states of consciousness which we term sensation, thought and emotion. ... It remained down to the time of Bichat [150 years later] a question of whether the passions were or were not located in the abdominal viscera. In the second place, Descartes lays down the proposition that all movements of animal bodies are effected by a change in form of a certain part of the matter of their bodies, to which he applies the general term of muscle.
The process of reasoning by which Descartes arrived at this startling conclusion is well shown in the following passage of the “Réponses:”– “But as regards the souls of beasts, although this is not the place for considering them, and though, without a general exposition of physics, I can say no more on this subject than I have already said in the fifth part of my Treatise on Method; yet, I will further state, here, that it appears to me to be a very remarkable circumstance that no movement can take place, either in the bodies of beasts, or even in our own, if these bodies have not in themselves all the organs and instruments by means of which the very same movements would be accomplished in a machine. So that, even in us, the spirit, or the soul, does not directly move the limbs, but only determines the course of that very subtle liquid which is called the animal spirits, which, running continually from the heart by the brain into the muscles, is the cause of all the movements of our limbs, and often may cause many different motions, one as easily as the other.
Descartes’ line of argument is perfectly clear. He starts from reflex action in man, from the unquestionable fact that, in ourselves, co-ordinate, purposive, actions may take place, without the intervention of consciousness or volition, or even contrary to the latter. As actions of a certain degree of complexity are brought about by mere mechanism, why may not actions of still greater complexity be the result of a more refined mechanism? What proof is there that brutes are other than a superior race of marionettes, which eat without pleasure, cry without pain, desire nothing, know nothing, and only simulate intelligence as a bee simulates a mathematician? ... Suppose that only the anterior division of the brain–so much of it as lies in front of the “optic lobes” – is removed. If that operation is performed quickly and skilfully, the frog may be kept in a state of full bodily vigour for months, or it may be for years; but it will sit unmoved. It sees nothing: it hears nothing. It will starve sooner than feed itself, although food put into its mouth is swallowed. On irritation, it jumps or walks; if thrown into the water it swims.
Klein (2018) notes that the crux of the paradigm shift lay in the competing research programmes of the opposing groups, and in the way the experimental successes of the mechanists at the time translated into the dominance of their paradigm:
But by the time of the Lewes contribution from 1877, the question was no longer whether this one subset of muscular action could be accounted for purely mechanistically. Now, the question had become whether the mechanistic approach to reflex action might be expanded to cover all muscular action. Lewes wrote that the ‘Reflex Theory’ had become a strategy where one attempted to specify ‘the elementary parts involved’ in every physiological function without ever appealing to ‘Sensation and Volition’ (Lewes, Problems of Life and Mind, 354).
‘That the majority of physiological opinion by the close of the century was in favor of the position of Pflüger’s opponents seems certain’, Fearing writes. ‘Mechanistic physiology and psychology was firmly seated in the saddle’ (Fearing, 1930, 185).
The concept of a mechanistic reflex arc came to dominate not just physiology, but psychology too. The behaviourist B. F. Skinner, for example, wrote his 1930 doctoral dissertation on how to expand the account of reflex action to cover all behaviour, even the behaviour of healthy organisms. Through the innovations of people like Skinner and, before him, Pavlov, behaviourism would establish itself as the dominant research paradigm.
Cannon (1911, 38) gave no real argument for why students should not regard purposive movement as a mark of genuine volition (beyond a quick gesture at Lotze’s long-discredited retort to Pflüger). Without citing any actual experiments, Cannon simply reported, as settled scientific fact, that purposiveness does not entail intended action:
Purposive movements are not necessarily intended movements. It is probable that reaction directed with apparent purposefulness is in reality an automatic repetition of movements developed for certain effects in the previous experience of the intact animal. (ibid)
Schwartz et al. (2005) highlight the key role William James played in establishing the status of volitional will:
William James (1890 138) argued against epiphenomenal consciousness, by claiming that ‘The particulars of the distribution of consciousness, so far as we know them, point to its being efficacious.’ James (136) stated that 'consciousness is at all times primarily a selecting agency.’ It is present when choices must be made between different possible courses of action. ‘It is to my mind quite inconceivable that consciousness should have nothing to do with a business to which it so faithfully attends’.
These liabilities of the notion of epiphenomenal mind and consciousness lead many thinkers to turn to the alternative possibility that a person’s mind and stream of consciousness is the very same thing as some activity in their brain: mind and consciousness are ‘emergent properties’ of brains. A huge philosophical literature has developed arguing for and against this idea.
They cite Sperry, who adopted an identity theory approach which he claimed was monist, in invoking a top-down systems theoretic notion of the mind as an abstraction of certain higher-level brain processes:
The core ideas of the arguments in favour of an identity-emergent theory of mind and consciousness are illustrated by Roger Sperry’s (1992) example of a ‘wheel’. A wheel obviously does something: it is causally efficacious; it carries the cart. It is also an emergent property: there is no mention of ‘wheelness’ in the formulation of the laws of physics and ‘wheelness’ did not exist in the early universe; ‘wheelness’ emerges only under certain special conditions. And the macroscopic wheel exercises ‘top-down’ control of its tiny parts. ... The reason that mind and consciousness are not analogous to ‘wheelness’, within the context of classic physics, is that the properties that characterize ‘wheelness’ are entailed, within the conceptual framework of classic physics, by properties specified in classic physics, whereas the properties that characterize conscious mental processes, namely the various ways these processes feel, are not entailed, within the conceptual structure provided by classic physics, by the properties specified by classic physics.
They quote James again in their theory of volition, based on the repeated application of attention to the issue at hand:
In the chapter on will, in the section entitled ‘Volitional effort is effort of attention’, James (1892 417) writes: “Thus we find that we reach the heart of our inquiry into volition when we ask by what process is it that the thought of any given action comes to prevail stably in the mind. ... The essential achievement of the will, in short, when it is most ‘voluntary,’ is to attend to a difficult object and hold it fast before the mind. Effort of attention is thus the essential phenomenon of will. ... Consent to the idea’s undivided presence, this is effort’s sole achievement. Everywhere, then, the function of effort is the same: to keep affirming and adopting the thought which, if left to itself, would slip away”.
Enshrining the concept of pure behaviourism, and reductionism more generally, Gilbert Ryle (1949) claimed in “The Concept of Mind” that "mind" is "a philosophical illusion hailing from René Descartes, and sustained by logical errors and 'category mistakes' which have become habitual”. Ryle rejected Descartes' theory of the relation between mind and body, on the grounds that it approaches the investigation of mental processes as if they could be isolated from physical processes. According to Ryle, the classical theory of mind, or "Cartesian rationalism," makes a basic category mistake (a new logical fallacy Ryle himself invented), as it attempts to analyze the relation between "mind" and "body" as if they were terms of the same logical category. The rationalist theory that there is a transformation into physical acts of some purely mental faculty of "Will" or "Volition" is therefore a misconception, because it mistakenly assumes that a mental act could be and is distinct from a physical act, or even that a mental world could be and is distinct from the physical world. This theory of the separability of mind and body is described by Ryle as "the dogma of the ghost in the machine.” However, Ryle did not regard himself as a philosophical behaviourist, writing only that the "general trend of this book will undoubtedly, and harmlessly, be stigmatised as ‘behaviourist’."
Symbiotic Existential Cosmology classes itself as ICAM (interactively complementary aspect monism), rather than dualism. The Stanford Encyclopaedia of Philosophy definitions for dualism (Robinson 2023) are:
Genuine property dualism occurs when, even at the individual level, the ontology of physics is not sufficient to constitute what is there. The irreducible language is not just another way of describing what there is, it requires that there be something more there than was allowed for in the initial ontology. Until the early part of the twentieth century, it was common to think that biological phenomena (‘life’) required property dualism (an irreducible ‘vital force’), but nowadays the special physical sciences other than psychology are generally thought to involve only predicate dualism (that psychological or mentalistic predicates are (a) essential for a full description of the world and (b) are not reducible to physicalistic predicates). In the case of mind, property dualism is defended by those who argue that the qualitative nature of consciousness is not merely another way of categorizing states of the brain or of behaviour, but a genuinely emergent phenomenon.
Substance dualism: There are two important concepts deployed in this notion. One is that of substance, the other is the dualism of these substances. A substance is characterized by its properties, but, according to those who believe in substances, it is more than the collection of the properties it possesses, it is the thing which possesses them. So the mind is not just a collection of thoughts, but is that which thinks, an immaterial substance over and above its immaterial states.
In the Stanford Encyclopaedia, Tanney (2022) notes that Ryle’s category error critique was centrally about the assumed distinctness or separability of mind and body as “substances”, in the context of the absurdity of certain verbal sentence constructions:
When a sentence is (not true or false but) nonsensical or absurd, though its vocabulary is conventional and its grammatical construction is regular, we say that it is absurd because at least one ingredient expression in it is not of the right type to be coupled or to be coupled in that way with the other ingredient expression or expressions in it. Such sentences, we may say, commit type-trespasses or break type-rules. (1938, 178)
The category mistake Ryle identifies in “There is a mind and a body” or “there is a mind or a body” is less obvious. For it takes a fair bit of untangling to show that “mind” and “body” are different logical or grammatical types; a fact which renders the assertion of either the conjunction or the disjunction nonsensical.
Robinson (2023) further notes both the veridical affirmation of interactivity in everyday life and the unverifiability of physical causal closure:
Interactionism is the view that mind and body – or mental events and physical events – causally influence each other. That this is so is one of our common-sense beliefs, because it appears to be a feature of everyday experience. The physical world influences my experience through my senses, and I often react behaviourally to those experiences. My thinking, too, influences my speech and my actions. There is, therefore, a massive natural prejudice in favour of interactionism.
Causal Closure: Most discussion of interactionism takes place in the context of the assumption that it is incompatible with the world's being 'closed under physics'. This is a very natural assumption, but it is not justified if causal overdetermination of behaviour is possible. There could then be a complete physical cause of behaviour, and a mental one. The problem with closure of physics may be radically altered if physical laws are indeterministic, as quantum theory seems to assert. If physical laws are deterministic, then any interference from outside would lead to a breach of those laws. But if they are indeterministic, might not interference produce a result that has a probability greater than zero, and so be consistent with the laws? This way, one might have interaction yet preserve a kind of nomological closure, in the sense that no laws are infringed. … Some argue that indeterminacy manifests itself only on the subatomic level, being cancelled out by the time one reaches even very tiny macroscopic objects: and human behaviour is a macroscopic phenomenon. Others argue that the structure of the brain is so finely tuned that minute variations could have macroscopic effects, rather in the way that, according to 'chaos theory', the flapping of a butterfly's wings in China might affect the weather in New York. (For discussion of this, see Eccles (1980), (1987), and Popper and Eccles (1977).) Still others argue that quantum indeterminacy manifests itself directly at a high level, when acts of observation collapse the wave function, suggesting that the mind may play a direct role in affecting the state of the world (Hodgson 1988; Stapp 1993).
Symbiotic Existential Cosmology does not assert “substance” dualism, as subjective conscious volition is not treated as a “substance” in the manner of objective physical entities, as mind was in Ryle's complaint against Cartesian dualism. SEC invokes a unified Cosmos in which primal subjectivity and the objective universe are complementary, mutually-interactive principles, in a universe which is not causally closed and in which volitional will can act without causal conflict, through quantum uncertainty. Life is also subject to causal overdetermination due to teleological influences such as autopoiesis, e.g. in the negentropic nature of life and evolution as self-organising far-from-equilibrium thermodynamic systems. The subjective aspect is fully compliant with determined physical boundary conditions of brain states, except in so far as subjective volition interacts with environmental quantum-derived uncertainty through quantum-sensitive unstable brain dynamics, forming a contextual filter theory of brain function on conscious experience, rather than a causally-closed universe determining ongoing brain states. Thus no pure-subjective interactivity is required, as occurs in traditional forms of panpsychism, such as pan-protopsychism or cosmopsychism.
The key counter to Ryle's complaint is this: if, in reply to a received e-mail, I tell its author – who has consciously intended to compose and send their message in physical form – "you have demonstrated that your subjective conscious volition has efficacy over the physical universe", this is not grammatically, semantically, or categorically absurd, but a direct empirical observation from experience that raises no physical or philosophical inconsistencies and fully confirms the empirical experience of subjective physical conscious agency, consistent with the civil and criminal law of conscious intentional responsibility. Ryle's strategy is linguistic. He attacks both the ontological commitment (the view that mind and body are somehow fundamentally different or distinct, but nonetheless interact) and the epistemological commitment (the inability to confirm other people are conscious because subjectivity is private) of what he calls the "official doctrine" (Tanney 2022). The problem is that, by dealing with it in a purely linguistic analysis, we are dealing only with objective semantic and grammatical connotations, so the argument is intrinsically objective. We know that subjectivity is private and objectivity is public. That's just the way it is! We also know that in all our discourses subjective-objective interactivity occurs. A hundred percent of our experience is subjective, and the world around us is inferred from our subjectively conscious experiences of it.
The way out is not to deny mind, or consciousness itself which we are all trying to fathom, or we are back to the hard problem of the objectively unfathomable explanatory gap. The way out is that the above statement "you have demonstrated that your subjective conscious volition has efficacy over the physical universe" is something that also involves conscious physical volition we can mutually agree on because it's evidenced in our behaviour in consciously responding to one another. Ryle is sitting by himself in his office dreaming up linguistic contradictions, but these evaporate through mutual affirmation of subjective volition. That's the transactional principle manifest. Then the category error vanishes in the subjective empirical method. This is why extending the hard problem to volition has been essential, because it's the product of conscious volition in behaviour that is verifiable.
In the Stanford Encyclopaedia, Tanney (2022) notes that Cartesianism is at worst "dead" in only one of its ontological aspects. Substance dualism may have been repudiated, but property dualism still claims a number of contemporary defenders. Furthermore, although Descartes embraced a form of substance dualism, in the sense that the pineal acted in response to the soul by making small movements that initiated wider responses in the brain, the pineal is still a biological entity, so the category error is misconceived. Descartes' description is remarkably similar to instabilities in brain dynamics potentially inducing global changes in brain dynamics. Compounded with the inability of materialism to solve the hard problem, science is thus coming full circle. It is not just a question of sentence construction but of Cosmology.
But Ryle’s rejection of Cartesian dualism fed a second paradigm shift, in which molecular biology, following Watson and Crick’s discovery of the structure of DNA, led to ever more effective ‘laying bare’ of all biological processes including the brain, accompanied by the new technologies of multi-electrode EEG and MEG and functional magnetic resonance imaging (fMRI). As a result, subjective consciousness became effectively ignored in the cascade of purely functionalist results about how human brain dynamics occurs.
Anil Seth (2018) notes:
The relationship between subjective conscious experience and its biophysical basis has always been a defining question for the mind and brain sciences. But, at various times since the beginnings of neuroscience as a discipline, the explicit study of consciousness has been either treated as fringe or excluded altogether. Looking back over the past 50 years, these extremes of attitude are well represented. Roger Sperry (1969, 532), pioneer of split-brain operations and of what can now be called ‘consciousness science’ lamented in 1969 that ‘most behavioral scientists today, and brain researchers in particular, have little use for consciousness’. Presciently, in the same article he highlighted the need for new technologies able to record the ‘pattern dynamics of brain activity’ in elucidating the neural basis of consciousness. Indeed, modern neuroimaging methods have had a transformative impact on consciousness science, as they have on cognitive neuroscience generally.
Informally, consciousness science over the last 50 years can be divided into two epochs. From the mid-1960s until around 1990 the fringe view held sway, though with several notable exceptions. Then, from the late 1980s and early 1990s, first a trickle and more recently a deluge of research into the brain basis of consciousness, a transition catalysed by – among other things – the activities of certain high-profile scientists (e.g. the Nobel laureates Francis Crick and Gerald Edelman) and by the maturation of modern neuroimaging methods, as anticipated by Sperry.
Symbiotic cosmology, based on complementarity, unlike a strictly dualist description, is coherent. This coherence – forming a complete whole without discrete distinction – is manifestly true in that we can engage either a subjective discourse on our experiences or an objective account of their material circumstances in every situation in waking life, just as the wave and particle aspects of quanta are coherent and cannot be separated, as complementary manifestations. We thus find that the human discourse on our existential condition has two complementary modes: one fixed in the objective physical description of the world around us using logical and causal operations, and the other describing our subjective conscious experiences, as intelligent sensual beings, which are, throughout our lives, our sole source of personal knowledge of the physical world around us, without which we would have no access to the universe at large, let alone to our dreams, memories and reflections (Jung 1963), all of which are conscious in nature, and often ascribed to be veridical, rather than imaginary, in the case of dreams and visionary states.
In Erwin Schrödinger’s words (1944): “The world is a construction of our sensations, perceptions, memories. It is convenient to regard it as existing objectively on its own. But it certainly does not become manifest by its mere existence” … “The reason why our sentient, percipient and thinking ego is met nowhere within our scientific world picture can easily be indicated in seven words: Because it is itself that world picture”.
A central problem faced by detractors of the role of consciousness in both the contexts of the brain and the quantum universe is that many of the materialist arguments depend on an incorrectly classical view of causality, or causal closure, in the context of brain dynamics, which is fundamentally inconsistent with quantum reality. In the brain context, this is purported to eliminate an adaptive role for consciousness in human and animal survival, reducing it to a form of epiphenomenalism, in which volitional will would be a self-serving delusion. This follows lines of thinking derived from computational ideas that interfering with a computational process would hinder its efficiency.
In relation to volitional will, Chalmers & McQueen (2021) note: “There are many aspects to the problem of consciousness, including the core problem of why physical processes should give rise to consciousness at all. One central aspect of the problem is the consciousness-causation problem: It seems obvious that consciousness plays a causal role, but it is surprisingly hard to make sense of what this role is and how it can be played.”
The problem with the idea of objective brain processing being causally closed is fivefold. Firstly, the key challenges to organismic survival are computationally intractable, open-environment problems, which may be better served by edge-of-chaos dynamics than by classical computation. Secondly, many problems of survival are not causally closed at all, because both evolution and organismic behaviour are creative processes, in which there are many viable outcomes, not just a single logically defined or optimal one. Thirdly, quantum uncertainty and its deeper manifestations in entanglement are universal, both in the brain and in the environment, so there are copious ways for consciousness to intervene without disrupting causally deterministic processes, and this appears to be its central cosmological role. Fourthly, the notion runs headlong into contradiction with our everyday experience of volition, in which we are consciously aware of our volitional intent and of its effects, both in our purposive decision-making and in our acts affecting the world around us. For causal closure to be true, all the purposive decisions on which we depend for our survival would have to be a perceptual delusion, contradicting the manifest nature of veridical perception generally. Fifthly, the work of Libet through to Schurger et al. demonstrates that causal closure is unproven, and it is likely to remain so, given the edge-of-chaos instability of critical brain processes in decision-making in the quantum universe.
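The edge-of-chaos sensitivity invoked above can be illustrated numerically. The following is a minimal sketch (illustrative only, not from the text, using the standard logistic map rather than any model of brain dynamics): in the chaotic regime, two trajectories differing by an amount far below any measurement precision diverge to macroscopic separation within a few dozen iterations, the kind of amplification through which quantum-scale fluctuations could in principle reach the macroscopic level.

```python
# Minimal sketch of sensitive dependence in the logistic map x -> r*x*(1-x).
# In the chaotic regime (r = 4.0), a perturbation of 1e-12 is amplified
# roughly exponentially until it reaches the size of the attractor itself.

def logistic_orbit(x0, r=4.0, steps=60):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-12)  # perturbation far below measurement precision

divergence = [abs(x - y) for x, y in zip(a, b)]
# Find the first step at which the gap becomes macroscopic (> 0.1).
first_macroscopic = next(i for i, d in enumerate(divergence) if d > 0.1)
print(first_macroscopic)  # typically a few dozen steps
```

Because the separation roughly doubles per iteration (Lyapunov exponent ln 2 at r = 4), a 10⁻¹² perturbation needs only on the order of 40 steps to become order-one, whereas a non-chaotic regime would leave it negligible indefinitely.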
Hopeful Monster 2: Consciousness and Surviving in the Wild v Attention Schema Theory
Real world survival problems in the open environment don’t necessarily have a causally-closed or even a computationally tractable solution, due to exponential runaway, as in the travelling salesman problem, thus requiring sensitive dependence via the butterfly effect and intuitive choices. Which route should the antelope take to reach the water hole when it comes to the fork in the trail? The shady path where a tiger might lurk, or the savannah where there could be a lion in the long grass? All the agents are conscious sentient beings using innovation and stealth, so computations depending on reasoned memory are unreliable, because the adversaries can also adapt their strategies and tactics to frustrate the calculations. The subtlest sensory hints of crisis amid split-second timing are also pivotal. There is thus no tractable solution. Integrated anticipatory intuition, combined with a historical knowledge of the terrain, appears to be the critical survival advantage of sentient consciousness in the prisoners’ dilemma of survival, just as sexuality is in the Red Queen race (Ridley 1996) between hosts and parasites. This coherent anticipation possessed by subjective consciousness appears to be the evolutionary basis for the emergence and persistence of subjective consciousness as a quantum-derived form of anticipation of adventitious risks to survival, not cognitive processes of verbal discourse.
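The "exponential runaway" of the travelling salesman problem can be made concrete with a short sketch (illustrative only; the `tour_count` helper is ours, not from the text). The number of distinct closed tours through n sites grows as (n−1)!/2, so brute-force evaluation becomes hopeless long before ecologically realistic numbers of waypoints are reached:

```python
# Minimal sketch: combinatorial explosion of exhaustive route planning.
# With a fixed starting point and tour direction ignored, the number of
# distinct closed tours through n sites is (n-1)!/2.

from math import factorial

def tour_count(n):
    """Distinct closed tours through n sites (fixed start, direction ignored)."""
    return factorial(n - 1) // 2

for n in (5, 10, 15, 20):
    print(n, tour_count(n))
# By n = 20 there are already ~6 x 10^16 tours - far more than any nervous
# system (or computer) could enumerate in the split seconds survival allows.
```

This is why the text argues that open-environment survival favours anticipatory intuition over exhaustive calculation: the search space outruns any tractable enumeration almost immediately.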
Michael Graziano’s attention schema theory, or AST (Graziano 2016, 2017; Webb & Graziano 2015), self-described as a mechanistic account of subjective awareness, which emerged in parallel with my own work (King 2014), gives an account of the evolutionary developments of the animal brain, taking account of the adaptive processes essential for survival to arrive at the kind of brains and conscious awareness we experience:
“We propose that the top–down control of attention is improved when the brain has access to a simplified model of attention itself. The brain therefore constructs a schematic model of the process of attention, the ‘attention schema,’ in much the same way that it constructs a schematic model of the body, the ‘body schema.’ The content of this internal model leads a brain to conclude that it has a subjective experience – a non-physical, subjective awareness and assigns a high degree of certainty to that extraordinary claim”.
Fig 91: Which route should the antelope take to reach the water hole when it comes to the fork in the trail? The shady path where a tiger might lurk, or the savannah where there could be a lion in the long grass? Real world survival problems require intuitive multi-option decisions, creativity and often split-second timing requiring anticipatory consciousness. Thus modelling the existence of subjective consciousness or otherwise based only on causal concepts and verbal reasoning processes gives a false evolutionary and cosmological view. Here is where the difference between a conscious organism and an AI robot attempting to functionally emulate it is laid bare in tooth and claw.
However, this presents the idea that subjective consciousness and volitional will are a self-fulfilling evolutionary delusion, so that the author believes AST, as a purely mechanistic principle, could in principle be extended to a machine without the presence of subjective consciousness: “Such a machine would ‘believe’ it is conscious and act like it is conscious, in the same sense that the human machine believes and acts”.
However, it remains unclear that a digital computer or AI process can achieve this with given architectures. Ricci et al. (2021) note in concluding remarks on one of the most fundamental and elementary tasks, abstract same-different discrimination: “The aforementioned attention and memory network models are stepping stones towards the flexible relational reasoning that so epitomizes biological intelligence. However, current work falls short of the — in our view, correct — standards for biological intelligence set by experimentalists like Delius (1994) or theorists like Fodor (1988).”
Yet AST is a type of filter theory similar to Huxley’s ideas about consciousness, so it invokes a principle of neural organisation that is consistent with and complementary to subjective consciousness: “Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence.”
The overall idea of a purely physical internal model of reality representing its own attention process, thus enabling it to observe itself, is an astute necessary condition for the sort of subjective consciousness we find across the metazoa, but it is in no way sufficient to solve the hard problem, or to address more than the one easy problem it tackles, that of recursive attention. However, the description of fundamental changes in overall brain architecture summarised in Graziano (2016) highlights that the actual evolutionary forces shaping the development of the conscious mind lie in the paranoia of survival in the jungle, as noted in fig 91, rather than in the verbal contortions of philosophical discourse:
“If the wind rustles the grass and you misinterpret it as a lion, no harm done. But if you fail to detect an actual lion, you’re taken out of the gene pool” (Michael Graziano 2016).
However Graziano (2020), in arguing why AST “has to be right”, commits to de-subjectifying consciousness in favour of an AI analysis of recursive attention systems. Regarding the reality of consciousness, which he concedes in his own words: “I have a subjective, conscious experience. It’s real; it’s the feeling that goes along with my brain’s processing of at least some things. I say I have it and I think I have it because, simply, I do have it. Let us accept its existence and stop quibbling about illusions”, he attempts a structural finesse based on recursive attention:
“Suppose the brain has a real consciousness. Logically, the reason why we intuit and think and say we have consciousness is not because we actually have it, but must be because of something else; it is because the brain contains information that describes us having it. Moreover, given the limitations on the brain’s ability to model anything in perfect detail, one must accept that the consciousness we intuit and think and say we have is going to be different from the consciousness that we actually have. … I will make the strong claim here that this statement – the consciousness we think we have is different from, simpler than, and more schematic than, the consciousness we actually have – is necessarily correct. Any rational, scientific approach must accept that conclusion. The bane of consciousness theorizing is the naïve, mistaken conflation of what we actually have with what we think we have. The attention schema theory systematically unpacks the difference between what we actually have and what we think we have. In AST, we really do have a base reality to consciousness: we have attention – the ability to focus on external stimuli and on internal constructs, and by focusing, process information in depth and enable a coordinated reaction. We have an ability to grasp something with the power of our biological processor. Attention is physically real. It’s a real process in the brain, made out of the interactions of billions of neurons. The brain not only uses attention, but also constructs information about attention – a model of attention. The central hypothesis of AST is that, by the time that information about attention reaches the output end of the pathway … , we’re claiming to have a semi-magical essence inside of us – conscious awareness. The brain describes attention as a semi-magical essence because the mechanistic details of attention have been stripped out of the description.”
These are simply opinions about a hidden underlying information structure, confusing conscious experience itself with the recursive attention structures that any realistic description has to entail to bring physical brain processing into any kind of concordance with environmental reality. His inability to distinguish organismic consciousness from AI is evidenced in Graziano (2017), where he sets out AST as a basis for biologically realisable artificial intelligence systems.
The actual answer to this apparent paradox, which leaves our confidence in our conscious volition in tatters, is that the two processes, neural-net attention schemata and subjective consciousness, have both been selected by evolution to ensure survival of the organism from existential threats, and they have done so as complementary processes. Organismic brains evolved from the excitable sentience of single-celled eucaryotes and their social signalling molecules, which became our neurotransmitters, a billion years after these same single-celled eucaryotes had to solve just these problems of growth and survival in the open environment. Brains are thus built as an intimately coupled society of excitable eucaryote cells, communicating by both electrochemical and biochemical means via neurotransmitters, in such a way that the network process is an evolutionary elaboration of the underlying cellular process, both of which have been conserved by natural selection because both contribute to organismic survival by anticipating existential threats.
This is the only possible conclusion, because the presence of attention schemata does not require the manifestation of subjective consciousness to the conscious participant unless that too plays an integral role in survival of the organism. Indeed, an artificial neural net with recursive attention schemes would do just that with no consciousness implied, since consciousness would be superfluous to its energy demands unless it conferred selective advantage.
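Graziano's core structural claim, that the brain's model of its own attention is necessarily simpler and more schematic than the attention process itself, can be caricatured in a few lines of code. This toy sketch is my own illustration, not Graziano's formalism; the function names and the confidence threshold are invented. It contrasts a full, graded attention distribution with the coarse self-model that retains only a focus and a crude confidence, the mechanistic detail having been "stripped out":

```python
import math

def softmax(scores):
    """Actual attention: a graded distribution over competing stimuli,
    computed from their salience scores."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_schema(weights):
    """Schematic self-model of attention: keeps only which stimulus won
    and a crude binary confidence, discarding the graded detail."""
    focus = weights.index(max(weights))
    confident = max(weights) > 0.5  # invented threshold
    return {"focus": focus, "confident": confident}

salience = [2.0, 0.5, 0.1]        # e.g. rustling grass vs two other stimuli
attn = softmax(salience)          # the real, graded process
schema = attention_schema(attn)   # the simplified story the system tells itself
```

The schema here is a lossy summary of the attention vector, which is all AST requires; the point argued above is that such a summary carries no implication of subjective consciousness unless consciousness itself earns its keep in survival.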
An adjunct notion is the ALARM theory (Newen & Montemayor 2023), which distinguishes two levels of consciousness, namely basic arousal and general alertness. Basic arousal functions as a specific alarm system, keeping a biological organism alive under sudden intense threats, while general alertness enables flexible learning and behavioural strategies. This two-level theory of consciousness helps to account for recent discoveries of subcortical brain activities with a central role for thalamic processes, and for observations of differences in the behavioural repertoire of non-human animals indicating two types of conscious experiences. The researchers claim this enables them to unify the neural evidence for the relevance of sub-cortical processes on the one hand, and of cortico-cortical loops on the other, and to clarify the evolutionary and actual functional role of conscious experiences.
They derive evidence primarily from two animal studies. In Afrasiabi et al. (2021), macaques were anaesthetised and the researchers stimulated the central lateral thalamus. The stimulation acted as a switch to trigger consciousness. However, it prompted only basic arousal: the macaques could feel pain, see things and react to them, but, unlike in regular wakefulness, they were unable to participate in learning tasks. A second experiment, Nakajima et al. (2019), provides evidence that mice possess general alertness in their daily lives. The animals were trained to respond differently to a sound than to a light signal. They were also capable of interpreting a third signal that indicated whether they should focus on the sound or the light signal. Given that the mice learned this quickly, it is clear that they acquired learning with focused conscious attention and therefore possess general alertness.
"Homo Prospectus" (Seligman et al. 2016) asserts that the unrivalled human ability to be guided by imagining alternatives stretching into the future – “prospection” – uniquely characterises Homo sapiens. It addresses the question of how ordinary conscious experience might relate to prospective processes, by contrast with psychology’s 120-year obsession with memory (the past) and perception (the present), and its absence of serious work on such constructs as expectation, anticipation, and will. Peter Railton cites:
Intuition: The moment-to-moment guidance of thought and action is typically intuitive rather than deliberative. Intuitions often come unbidden, and we can seldom explain just where they came from or what their basis might be. They seem to come prior to judgment, and although they often inform judgment, they can also stubbornly refuse to line up with our considered opinions.
Affect: According to the prospection hypothesis, our emotional or affective system is constantly active because we are constantly in the business of evaluating alternatives and selecting among them.
Information: A system of prospective guidance is information-intensive, calling for individuals to attend to many variables and to update their values continuously in response to experience.
They also see deliberative cognitive processes as intertwined with and integrated by intuitive processes:
One view, which we call the separate processors view, says intuition and deliberation are separate, distinct modes of thought. An opposing view says intuition and deliberation are thoroughly intertwined; deliberation is constructed with intuition as a main ingredient. On this second view, there aren’t two independent processors. Rather, deliberation depends fundamentally on intuitive affective evaluations.
They associate imagination with the wandering mind, which we shall see is identifiable with the default mode network critical in ego dissolution and central to rehearsing survival strategies:
Think about what goes consciously through your mind during idle moments. This is mind-wandering, and it is deeply puzzling to theorists. The biggest puzzle is why we do so much of it. One study, which used experience sampling methods with 2,250 adults, found mind-wandering occurred in a remarkable 46.9% of the time points sampled.
On free will, the authors dodge the core philosophical debate, assuming that philosophers of all bents embrace some form of free will, and instead pragmatically introduce the multiple-options question that plagues all environmental survival decisions:
We will argue that the distinctive mark of human freedom is latitude. Latitude refers to what agents have when the “size” of their option set is large. For now, we can say an agent has more latitude when the number of distinct options in the option set is larger. A bit later, we will provide a more refined account of how to understand the “size” of an option set.
Some anticipatory aspects of our conscious experience of the world make it possible for the brain sometimes to construct a present that has never actually occurred. In the "flash-lag" illusion, a screen displays a rotating disc with an arrow on it, pointing outwards. Next to the disc is a spot of light programmed to flash at the exact moment the spinning arrow passes it. Instead, to our experience, the flash lags behind, apparently occurring after the arrow has passed (Westerhoff 2013). One explanation is that our brain extrapolates into the future, making up for visual processing time by predicting where the arrow will be. However, rather than extrapolating into the future, our brain is actually interpolating events in the past, assembling a story of what happened retrospectively, as was shown by a subtle variant of the illusion (Eagleman & Sejnowski 2000).
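The two accounts of the flash-lag effect differ in the direction of the temporal correction the brain applies. A hypothetical numerical sketch follows; the rotation rate, latency and averaging window are invented parameters for illustration, not values from the cited studies:

```python
def extrapolated_angle(angle_at_flash, omega, delay):
    """'Prediction' account: the percept projects the arrow forward
    by the visual processing delay, so the flash appears to lag."""
    return angle_at_flash + omega * delay

def postdicted_angle(angle_at_flash, omega, window):
    """'Postdiction' account (Eagleman & Sejnowski 2000): the percept is
    assembled retrospectively from motion sampled after the flash,
    here crudely averaged over a short post-flash window."""
    return angle_at_flash + omega * (window / 2.0)

OMEGA = 360.0   # deg/s rotation rate (invented)
DELAY = 0.08    # 80 ms processing latency (invented)
WINDOW = 0.08   # 80 ms postdictive window (invented)

# Both accounts place the perceived arrow ahead of the flash position,
# but only postdiction explains the illusion variants in which the
# motion is reversed immediately after the flash.
```

On these invented numbers, both models displace the perceived arrow in the direction of motion; the variant experiments distinguish them because only the postdictive account depends on what happens after the flash.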
Given the complementary roles of conscious quantum measurement and edge-of-chaos coherence dynamics, far from being an ephemeral state of a biological organism’s brain dynamics that is irrelevant to the universe at large, the symbiotic cosmology asserts that consciousness has a foundational role in existential cosmology, complementary to the entire phenomenon of the physical universe. The conscious brain may also literally be a/the most complex functional system in the universe, so manifests emergent properties undeveloped in other physical processes. This is not dualistic, but an extension of quantum wave-particle complementarity to a larger complementarity, in which mind is complementary to the universe as a whole. It is thus non-local in a more complete way than the quantum wave aspect is in complementation to the localised pa