Sophisticated models of human social behaviour are fast becoming highly desirable in an increasingly complex and interrelated world. Here, we propose that rather than taking established theories from the physical sciences and naively mapping them into the social world, the advanced concepts and theories of social psychology should be taken as a starting point, and used to develop a new modelling methodology. In order to illustrate how such an approach might be carried out, we attempt to model the low elaboration attitude changes of a society of agents in an evolving social context. We propose a geometric model of an agent in context, where individual agent attitudes are seen to self-organise to form ideologies, which then serve to guide further agent-based attitude changes. A computational implementation of the model is shown to exhibit a number of interesting phenomena, including a tendency for a measure of the entropy in the system to decrease, and a potential for externally guiding a population of agents towards a new desired ideology.
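A minimal sketch of the kind of dynamics this model exhibits (an illustration only, not the paper's implementation) might place each agent's attitude on the unit circle, let agents drift towards randomly sampled neighbours, and track the Shannon entropy of the binned attitudes, which tends to fall as ideologies condense:

    import numpy as np

    rng = np.random.default_rng(0)
    attitudes = rng.uniform(0, 2 * np.pi, size=200)  # one attitude angle per agent

    def entropy(angles, bins=16):
        counts, _ = np.histogram(angles % (2 * np.pi), bins=bins, range=(0, 2 * np.pi))
        p = counts[counts > 0] / counts.sum()
        return -(p * np.log2(p)).sum()

    for step in range(100):
        # each agent moves a little towards a randomly sampled "social" neighbour
        partners = rng.integers(0, len(attitudes), size=len(attitudes))
        delta = np.angle(np.exp(1j * (attitudes[partners] - attitudes)))
        attitudes += 0.1 * delta
        if step % 25 == 0:
            print(f"step {step:3d}: entropy = {entropy(attitudes):.3f} bits")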
Biological systems exhibit a wide range of contextual effects, and this often makes it difficult to construct valid mathematical models of their behaviour. In particular, mathematical paradigms built upon the successes of Newtonian physics make assumptions about the nature of biological systems that are unlikely to hold true. After discussing two of the key assumptions underlying the Newtonian paradigm, we examine two key aspects of the formalism that superseded it, Quantum Theory (QT). We draw attention to the similarities between biological and quantum systems, motivating the development of a similar formalism that can be applied to the modelling of biological processes.
The contextuality of changing attitudes makes them extremely difficult to model. This paper scales up Quantum Decision Theory (QDT) to a social setting, using it to model the manner in which social contexts can interact with the process of low elaboration attitude change. The elements of this extended theory are presented, along with a proof of concept computational implementation in a low dimensional subspace. This model suggests that a society's understanding of social issues will settle down into a static or frozen configuration unless that society consists of a range of individuals with varying personality types and norms.
Free association norms indicate that words are organized into semantic/associative neighborhoods within a larger network of words and links that bind the net together. We present evidence indicating that memory for a recent word event can depend on implicitly and simultaneously activating related words in its neighborhood. Processing a word during encoding primes its network representation as a function of the density of the links in its neighborhood. Such priming increases recall and recognition and can have long lasting effects when the word is processed in working memory. Evidence for this phenomenon is reviewed in extralist cuing, primed free association, intralist cuing, and single-item recognition tasks. The findings also show that when a related word is presented to cue the recall of a studied word, the cue activates it in an array of related words that distract and reduce the probability of its selection. The activation of the semantic network produces priming benefits during encoding and search costs during retrieval. In extralist cuing recall is a negative function of cue-to-distracter strength and a positive function of neighborhood density, cue-to-target strength, and target-to-cue strength. We show how four measures derived from the network can be combined and used to predict memory performance. These measures play different roles in different tasks indicating that the contribution of the semantic network varies with the context provided by the task. We evaluate spreading activation and quantum-like entanglement explanations for the priming effect produced by neighborhood density.
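The four network measures can be rendered in a toy form (an illustrative reconstruction; the words, numbers, and the density definition below are assumptions, not the norms themselves):

    # toy free association norms: forward strengths between words
    norms = {
        "planet": {"earth": 0.61, "mars": 0.10, "space": 0.05},
        "earth":  {"planet": 0.23, "world": 0.31, "space": 0.08},
        "space":  {"earth": 0.12, "rocket": 0.20},
        "mars":   {"planet": 0.15, "space": 0.09},
    }

    def strength(a, b):
        return norms.get(a, {}).get(b, 0.0)

    def neighbourhood_density(target):
        """Mean link strength among the target's associates."""
        assoc = list(norms.get(target, {}))
        links = [strength(x, y) for x in assoc for y in assoc if x != y]
        return sum(links) / len(links) if links else 0.0

    def distracter_strength(cue, target):
        """Total strength from the cue to associates other than the target."""
        return sum(s for w, s in norms.get(cue, {}).items() if w != target)

    cue, target = "planet", "earth"
    print("cue-to-target:     ", strength(cue, target))
    print("target-to-cue:     ", strength(target, cue))
    print("cue-to-distracter: ", distracter_strength(cue, target))
    print("target density:    ", round(neighbourhood_density(target), 3))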
Conceptual combination performs a fundamental role in creating the broad range of compound phrases utilized in everyday language. This article provides a novel probabilistic framework for assessing whether the semantics of conceptual combinations are compositional, and so can be considered as a function of the semantics of the constituent concepts, or not. While the systematicity and productivity of language provide a strong argument in favor of assuming compositionality, this very assumption is still regularly questioned in both cognitive science and philosophy. Additionally, the principle of semantic compositionality is underspecified, which means that notions of both "strong" and "weak" compositionality appear in the literature. Rather than adjudicating between different grades of compositionality, the framework presented in this article contributes formal methods for determining a clear dividing line between compositional and non-compositional semantics. In addition, the article suggests that the distinction between these is contextually sensitive. Utilizing formal frameworks developed for analyzing composite systems in quantum theory, we present two methods that allow the semantics of conceptual combinations to be classified as "compositional" or "non-compositional". The first relies on an assumption that the joint probability distribution modeling the combination is factorizable. The second provides a necessary and sufficient condition for the joint probability distribution to exist. A failure to meet this condition implies that the underlying concepts cannot be modeled in a single probability space when considering their combination. An empirical study of twenty-four novel conceptual combinations showed convincing evidence that some conceptual combinations behave non-compositionally according to this framework.
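In the simplest binary case, the first method reduces to asking whether a joint distribution equals the product of its marginals; a minimal sketch with illustrative numbers:

    import numpy as np

    # P[i, j] = probability that concept A yields sense i and concept B sense j
    P = np.array([[0.40, 0.10],
                  [0.10, 0.40]])

    pA = P.sum(axis=1)          # marginal for the first concept
    pB = P.sum(axis=0)          # marginal for the second concept
    factorised = np.outer(pA, pB)

    if np.allclose(P, factorised, atol=1e-6):
        print("compositional: P(a, b) = P(a) P(b)")
    else:
        print("non-compositional: joint distribution does not factorise")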
At the core of our uniquely human cognitive abilities is the capacity to see things from different perspectives, or to place them in a new context. We propose that this was made possible by two cognitive transitions. First, the large brain of Homo erectus facilitated the onset of recursive recall: the ability to string thoughts together into a stream of potentially abstract or imaginative thought. This hypothesis is supported by a set of computational models where an artificial society of agents evolved to generate more diverse and valuable cultural outputs under conditions of recursive recall. We propose that the capacity to see things in context arose much later, following the appearance of anatomically modern humans. This second transition was brought on by the onset of contextual focus: the capacity to shift between a minimally contextual analytic mode of thought, and a highly contextual associative mode of thought, conducive to combining concepts in new ways and "breaking out of a rut". When contextual focus is implemented in an art-generating computer program, the resulting artworks are seen as more creative and appealing. We summarize how both transitions can be modeled using a theory of concepts which highlights the manner in which different contexts can lead to modern humans attributing very different meanings to the interpretation of one concept.
We utilise the well-developed quantum decision models known to the QI community to create a higher order social decision making model. A simple Agent Based Model (ABM) of a society of agents with changing attitudes towards a social issue is presented, where the private attitudes of individuals in the system are represented using a geometric structure inspired by quantum theory. We track the changing attitudes of the members of that society, and their resulting propensities to act, or not, in a given social context. A number of new issues surrounding this "scaling up" of quantum decision theories are discussed, as well as new directions and opportunities.
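A minimal sketch of such a geometric agent (illustrative only): the private attitude is a unit vector in a real two-dimensional space, and the propensity to act in a given context is its squared projection onto that context's "act" basis vector:

    import numpy as np

    theta = np.deg2rad(30)                      # the agent's private attitude
    psi = np.array([np.cos(theta), np.sin(theta)])

    act = np.array([1.0, 0.0])                  # "act" basis vector of this context
    p_act = np.dot(act, psi) ** 2               # Born-rule propensity to act

    print(f"propensity to act: {p_act:.3f}")    # cos^2(30 deg) = 0.75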
Modelling how a word is activated in human memory is an important requirement for determining the probability of recall of a word in an extra-list cueing experiment. Previous research assumed a quantum-like model in which the semantic network was modelled as entangled qubits; however, the level of activation was clearly being overestimated. This paper explores three variations of this model, distinguished by the scaling factor designed to compensate for the overestimation.
This paper proposes a well developed mathematical apparatus to determine whether a conceptual combination is compositional, or not. Within cognitive science, systematicity and productivity appear to require that conceptual representation should be compositional, but the need to represent prototypical information implies that concepts must be represented non-compositionally. As a consequence, the question of under what conditions conceptual representation is compositional (or not) remains debatable. By drawing on general probabilistic methods developed in quantum theory to test whether a system is decomposable, or not, a formal test is proposed that can determine whether a specific conceptual combination is compositional. This test examines a joint probability distribution modelling the combination, asking whether or not it is factorizable. Empirical studies indicate that some combinations should be considered as non-compositional.
As computers approach the physical limits of information storable in memory, new methods are needed to advance computing power. We propose a quantum vector-based approach to memory which may overcome the difficulty of mapping symbolic to subsymbolic representations. The approach is inspired by the structure of human memory and incorporates elements of Gardenfors' vector-based Conceptual Space approach and Humphreys et al.'s matrix model of memory. Though in its infancy, the quantum information retrieval approach can provide not just exceptionally high density memory storage but creative capacities as well.
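As a rough illustration of a matrix-style associative memory of the kind referenced here (not the cited model itself), cue-target pairs can be stored as summed outer products and retrieved by matrix multiplication:

    import numpy as np

    rng = np.random.default_rng(1)
    dim = 64
    cues = rng.standard_normal((3, dim))
    targets = rng.standard_normal((3, dim))

    # store: M is the sum over pairs of (target outer cue)
    M = sum(np.outer(t, c) for t, c in zip(targets, cues))

    retrieved = M @ cues[0]                      # probe memory with the first cue
    sims = [np.dot(retrieved, t) / (np.linalg.norm(retrieved) * np.linalg.norm(t))
            for t in targets]
    print("cosine similarity to each stored target:", np.round(sims, 2))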
Compositionality is a frequently made assumption in linguistics, and yet many human subjects reveal highly non-compositional word associations when confronted with novel concept combinations. This article will show how a non-compositional account of concept combinations can be supplied by modelling them as interacting quantum systems. Consider the concept combination "pet human". In word association experiments, human subjects produce the associate "slave" in relation to this combination. The striking aspect of this associate is that it is not produced as an associate of "pet" or "human" in isolation. In other words, the associate "slave" seems to be emergent. Such emergent associations sometimes have a creative character, and cognitive science is largely silent about how we produce them. Departing from a dimensional model of human conceptual space, this article will explore concept combinations, and will argue that emergent associations are related to abductive reasoning within conceptual space.
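A minimal sketch of the "interacting quantum systems" reading (the state and sense labels are illustrative assumptions): an entangled joint state over the senses of "pet" and "human" yields outcome statistics that no product of marginal distributions reproduces:

    import numpy as np

    # entangled state a|00> + b|11> over senses of "pet" and "human"
    a, b = np.sqrt(0.5), np.sqrt(0.5)
    joint = np.array([[abs(a) ** 2, 0.0],
                      [0.0, abs(b) ** 2]])      # P(sense_pet, sense_human)

    pA, pB = joint.sum(axis=1), joint.sum(axis=0)
    print("joint:\n", joint)
    print("product of marginals:\n", np.outer(pA, pB))  # differs: non-compositional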
How do humans respond to their social context? This question is becoming increasingly urgent in a society where democracy requires that the citizens of a country help to decide upon its policy directions, and yet those citizens frequently have very little knowledge of the complex issues that these policies seek to address. Frequently, we find that humans make their decisions more with reference to their social setting than to the arguments of scientists, academics, and policy makers. It is broadly anticipated that the agent based modelling (ABM) of human behaviour will make it possible to treat such social effects, but we take the position here that a more sophisticated treatment of context will be required in many such models. While notions such as historical context (where the past history of an agent might affect its later actions) and situational context (where the agent will choose a different action in a different situation) abound in ABM scenarios, we will discuss a case of a potentially changing context, where social effects can have a strong influence upon the perceptions of a group of subjects. In particular, we shall discuss a recently reported case where a biased worm in an election debate led to significant distortions in the reports given by participants as to who won the debate (Davis et al. 2011). Thus, participants in a different social context drew different conclusions about the perceived winner of the same debate, with associated significant differences between the two groups as to who they would vote for in the coming election. We extend this example to the problem of modelling the likely electoral responses of agents in the context of the climate change debate, and discuss the notion of interference between related questions that might be asked of an agent in a social simulation intended to simulate their likely responses. A modelling technology which could account for such strong social contextual effects would benefit regulatory bodies which need to navigate between multiple interests and concerns, and we shall present one viable avenue for constructing such a technology. A geometric approach will be presented, where the internal state of an agent is represented in a vector space, and their social context is naturally modelled as a set of basis states that are chosen with reference to the problem space.
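The interference between related questions mentioned here can be illustrated with the textbook two-question quantum model (a sketch with illustrative bases, not the paper's own construction): asking question A first projects the agent's state, which changes the statistics of a subsequent question B:

    import numpy as np

    def basis(angle):
        return np.array([np.cos(angle), np.sin(angle)])

    psi = basis(np.deg2rad(20))          # agent state
    a_yes = basis(0.0)                   # "yes" vector for question A
    a_no = basis(np.deg2rad(90))         # orthogonal "no" vector for question A
    b_yes = basis(np.deg2rad(45))        # "yes" vector for question B

    # B asked alone
    p_b = np.dot(b_yes, psi) ** 2

    # A asked first: the state collapses to a_yes or a_no, then B is asked
    p_a = np.dot(a_yes, psi) ** 2
    p_b_after_a = p_a * np.dot(b_yes, a_yes) ** 2 + (1 - p_a) * np.dot(b_yes, a_no) ** 2

    print(f"P(B=yes) alone:   {p_b:.3f}")
    print(f"P(B=yes) after A: {p_b_after_a:.3f}")   # differs: question order matters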
Modeling how a word is activated in human memory is an important feature for determining the probability of recall of a word in an extra-list cueing experiment. The spreading activation, spooky-action-at-a-distance and entanglement models have been used to model the activation of a word. Recently a hypothesis was put forward where the mean activation levels of the respective models are as follows: Spreading-activation ≤ Entanglement ≤ Spooky-action-at-a-distance. This article investigates this hypothesis by means of a substantial empirical analysis of each model using the University of South Florida word association, rhyme and word fragment norms.
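Under one common rendering of the first and last of these equations (an assumption here; the entanglement model is hypothesised to lie between them), spreading activation multiplies link strengths along cue-associate-target paths, while spooky-action-at-a-distance simply sums the links:

    s_ct = 0.20                                   # cue -> target strength
    links = [(0.30, 0.25), (0.10, 0.40)]          # (cue -> associate, associate -> target)

    spreading = s_ct + sum(ca * at for ca, at in links)
    spooky = s_ct + sum(ca + at for ca, at in links)

    print(f"spreading activation:        {spreading:.3f}")
    print(f"spooky action at a distance: {spooky:.3f}")  # >= spreading, as hypothesised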
Vector space based approaches to natural language processing are contrasted with human similarity judgements to show the manner in which human subjects fail to produce data which satisfies all requirements for a metric space. This result constrains the validity and applicability of vector space based (and hence also quantum inspired) approaches to the modelling of cognitive processes. This paper proposes a resolution to this problem, by arguing that pairs of words imply a context which in turn induces a point of view, so allowing a subject to estimate semantic similarity. Context is here introduced as a point of view vector (POVV) and the expected similarity is derived as a measure over the POVVs. Different pairs of words will invoke different contexts and different POVVs. We illustrate the proposal on a few triples of words and outline further research.
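One illustrative way to realise a POVV (an assumption, not the paper's construction) is to reweight toy word vectors by the context before taking a cosine, so that the same pair of words yields different similarities under different points of view:

    import numpy as np

    def pov_similarity(w1, w2, pov):
        # reweight each feature by the point-of-view vector, then take the cosine
        c1, c2 = w1 * pov, w2 * pov
        return float(np.dot(c1, c2) / (np.linalg.norm(c1) * np.linalg.norm(c2)))

    bat  = np.array([0.9, 0.1, 0.3])    # toy features: (sport, animal, night)
    ball = np.array([0.8, 0.0, 0.05])
    sport  = np.array([1.0, 0.1, 0.5])  # POVV emphasising sport features
    animal = np.array([0.1, 1.0, 0.2])  # POVV emphasising animal features

    print("bat~ball | sport POVV :", round(pov_similarity(bat, ball, sport), 2))
    print("bat~ball | animal POVV:", round(pov_similarity(bat, ball, animal), 2))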
Measures and theories of information abound, but there are few formalised methods for treating the contextuality that can manifest in different information systems. Quantum theory provides one possible formalism for treating information in context. This paper introduces a quantum-like model of the human mental lexicon, and shows one set of recent experimental data suggesting that concept combinations can indeed behave non-separably. There is some reason to believe that the human mental lexicon displays entanglement.
Separability is a concept that is very difficult to define, and yet much of our scientific method is implicitly based upon the assumption that systems can sensibly be reduced to a set of interacting components. This paper examines the notion of separability in the creation of bi-ambiguous compounds, using an analysis based upon the CHSH and CH inequalities. It reports results of an experiment showing that violations of the CHSH and CH inequalities can occur in human conceptual combination.
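The CHSH quantity itself is standard: with correlations E(x, y) estimated from judgement frequencies under two interpretation settings per concept, a classical (separable) model obeys |S| <= 2. A sketch with illustrative numbers:

    def chsh(e_ab, e_ab2, e_a2b, e_a2b2):
        # S = |E(a,b) + E(a,b') + E(a',b) - E(a',b')|
        return abs(e_ab + e_ab2 + e_a2b - e_a2b2)

    # illustrative correlations under settings (a, a') and (b, b')
    s = chsh(0.7, 0.6, 0.65, -0.6)
    print(f"CHSH = {s:.2f}", "(violates 2)" if s > 2 else "(within classical bound)")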
This article introduces a "pseudo-classical" notion of modelling the non-separability of phenomena. This form of non-separability can be viewed as lying between separability and quantum-like non-separability. Non-separability is formalized in terms of the non-factorizability of the underlying joint probability distribution. Two decision criteria for determining the non-factorizability of the joint distribution are presented: one related to determining the rank of a matrix, and another based on the chi-square goodness-of-fit test. This pseudo-classical notion of non-separability is discussed in terms of quantum games and concept combinations in human cognition.
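The rank criterion has a simple form in the binary case: a joint probability matrix factorises into independent marginals exactly when it has rank one. A minimal sketch with toy matrices:

    import numpy as np

    factorisable = np.outer([0.3, 0.7], [0.6, 0.4])        # rank 1 by construction
    non_factorisable = np.array([[0.45, 0.05],
                                 [0.05, 0.45]])

    for name, P in [("factorisable", factorisable), ("non-factorisable", non_factorisable)]:
        rank = np.linalg.matrix_rank(P, tol=1e-10)
        verdict = "separable" if rank == 1 else "pseudo-classically non-separable"
        print(f"{name}: rank = {rank} -> {verdict}")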
Language exhibits a number of contextuality and non-separability effects. This paper reviews a new set of quantum-like models that show promise for capturing this complexity.
Science has been under attack in the last thirty years, and recently a number of prominent scientists have been busy fighting back. Here, an argument is presented that the `science wars' stem from an unreasonably strict adherence to the reductive method on the part of science, but that weakening this stance need not imply a lapse into subjectivity. One possible method for formalising the description of non-separable, contextually dependent complex systems is presented. This is based upon a quantum-like approach.
Following an early claim by Nelson & McEvoy suggesting that word associations can display `spooky action at a distance behaviour', a serious investigation of the potentially quantum nature of such associations is currently underway. In this paper quantum theory is proposed as a framework suitable for modelling the mental lexicon, specifically the results obtained from both intralist and extralist word association experiments. Some initial models exploring this hypothesis are discussed, and they appear to be capable of substantial agreement with pre-existing experimental data. The paper concludes with a discussion of some experiments that will be performed in order to test these models.
In this third Quantum Interaction (QI) meeting it is time to examine our failures. One of the weakest elements of QI as a field arises in its continuing lack of models displaying proper evolutionary dynamics. This paper presents an overview of the modern generalised approach to the derivation of time evolution equations in physics, showing how the notion of symmetry is essential to the extraction of operators in quantum theory. The form that symmetry might take in non-physical models is explored, with a number of viable avenues identified.
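One concrete instance of the symmetry-to-operator route (standard quantum theory, not this paper's generalisation): time-translation symmetry, via Stone's theorem, already fixes the form of the evolution equation:

    % a strongly continuous one-parameter unitary group has a self-adjoint generator H
    U(s+t) = U(s)\,U(t)
    \quad\Longrightarrow\quad
    U(t) = e^{-iHt},
    \qquad
    i\,\frac{\mathrm{d}}{\mathrm{d}t}\,\lvert\psi(t)\rangle = H\,\lvert\psi(t)\rangle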
Following an early claim by Nelson & McEvoy suggesting that word associations can display `spooky action at a distance behaviour', a serious investigation of the potentially quantum nature of such associations is currently underway. This paper presents a simple quantum model of a word association system. It is shown that a quantum model of word entanglement can recover aspects of both the Spreading Activation model and the Spooky model of word association experiments.
According to recent studies in developmental psychology and neuroscience, symbolic language is essentially intersubjective. Empathetically relating to others renders possible the acquisition of linguistic constructs. Intersubjectivity develops in early ontogenetic life when interactions between mother and infant mutually shape their relatedness. Empirical findings suggest that the shared attention and intention involved in those interactions is sustained as it becomes internalized and embodied. Symbolic language is derivative and emerges from shared intentionality. In this paper, we present a formalization of shared intentionality based upon a quantum approach. From a phenomenological viewpoint, we investigate the nonseparable, dynamic and sustainable nature of social cognition and evaluate the appropriateness of quantum interaction for modelling intersubjectivity.
Information systems are socio-technical systems. Their design, analysis and implementation requires appropriate languages for representing social and technical concepts. However, many symbolic modelling approaches fall into the trap of underemphasizing social aspects of information systems. This often leads to an inability of ontological models to incorporate effects such as contextual dependence and emergence. Moreover, as designers take the perspective of people living with and alongside the information system to be modelled, social interaction becomes a primary concern. Ontologies are too prescriptive and do not account properly for social concepts. Based on State-Context-Property (SCoP) systems, we propose a quantum-inspired approach for modelling information systems.
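A minimal sketch of a SCoP structure as we read it from the literature (the "pet" values below are illustrative assumptions): states, contexts, properties, a context-driven state-transition probability mu, and a property applicability weight nu:

    states = ["pet-default", "pet-at-vet"]
    contexts = ["none", "the pet is at the vet"]
    properties = ["furry", "nervous"]

    mu = {  # mu[(q, e, p)] = probability that context e shifts state p to state q
        ("pet-at-vet", "the pet is at the vet", "pet-default"): 0.9,
        ("pet-default", "the pet is at the vet", "pet-default"): 0.1,
    }
    nu = {  # nu[(p, a)] = weight of property a in state p
        ("pet-default", "nervous"): 0.2,
        ("pet-at-vet", "nervous"): 0.8,
    }

    p_nervous = sum(mu.get((q, "the pet is at the vet", "pet-default"), 0.0)
                    * nu.get((q, "nervous"), 0.0) for q in states)
    print(f"weight of 'nervous' after the context is applied: {p_nervous:.2f}")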
Despite the general recognition of complexity as an important concept and decades of work, very little progress has been made in the attempt to define complexity. It is suggested that this is due to the fact that the definition of complex behaviour is itself complex, forming a scale from the simple to the more and more complex. Those systems at the high end of the scale are not at present well modelled, and reasons why this might be the case are presented. The possibility that quantum theories may be able to model such high end complexity is investigated.
This article investigates one of the fundamental issues confronting a field that investigates quantum interaction; namely, why is it necessary? The need to investigate an interaction using a quantum formalism is argued to arise when the system under study is sufficiently complex. In particular, if the system is displaying contextual behaviour then a quantum approach often incorporates this behaviour very naturally. Thus, a way in which much of the disparate work in the field of quantum interaction can be both justified to the broader community and eventually unified is presented. The nature of contextual behaviour and its relationship to nonlocality is explored.
Human memory experiments appear to be generating "nonlocal" effects. In this paper the possibility that words might be entangled in human semantic space is seriously entertained. This approach leads to a very natural picture of the way in which context might affect word association via the standard interpretation of quantum measurement. Two possible scenarios for testing such a hypothesis are suggested, both based upon potential violations of the CHSH inequality.
Originally based upon a pregeometric model of the Universe, Process Physics has now been formulated as a far more general modelling paradigm that is capable of generating complex emergent behaviour. This article discusses the original relational model of Process Physics and the emergent hierarchical structure that it generates, linking the reason for this emergence to the historical basis of the model in quantum field theory. This historical connection is used to motivate a new interpretation of the general class of quantum theories as providing models of certain aspects of complex behaviour. A summary of this new realistic interpretation of quantum theory is presented and some applications of this viewpoint to the description of complex emergent behaviour are sketched out.
Despite its early successes, ALife has not tended to live up to its original promise, with any emergent behaviour very rarely manifesting itself in such a way that new higher level emergence can occur. This problem has been recognised in two related concepts; the failure of ALife simulations to display Open Ended Evolution, and their inability to dynamically generate more than two hierarchical levels of behaviour. This paper will suggest that these problems of ALife stem from a missing sense of contextuality in the models of ALife. A number of theories which exhibit some form of contextual dependence will be discussed, in particular, the gauge theories of quantum field theory.
The Michelson-Morley interferometer experiments were designed to measure the speed of the Earth through the aether. The results were always believed to have been null - no effect. This outcome formed the basis for Einstein's Special and General Relativity formalism. The new process physics shows that absolute motion, now understood to be relative to the quantum foam that is space, is observable, but only if the interferometer operates in gas mode. A re-analysis here shows that the results from the gas-mode interferometers were not null, but in fact large when re-analysed to take account of the effect of the air, or helium, in which the apparatus operated. The speed of absolute motion is comparable to that determined from the Cosmic Background Radiation anisotropy, but the direction is not revealed. So absolute motion is meaningful and measurable, thus refuting Einstein's assumption. This discovery shows that a major re-assessment of the interpretation of the Special and General Relativity formalism is called for, a task already provided by Process Physics. This new information-theoretic physics makes it clear that Michelson-Morley type experiments are detecting motion through the quantum foam, which is space. Hence we see direct evidence of quantum gravity effects, as predicted by Process Physics.
A new process orientated physics is being developed at Flinders University. These ideas were initially motivated by deep unsolved problems in fundamental physics, such as the difficulty of quantizing gravity, the missing arrow of time, the question of how to interpret quantum mechanics, and perhaps most importantly, a problem with the very methodology of our fundamental descriptions of the Universe. A proposed solution to these problems, Process Physics, has led to what can be viewed as a hierarchical model of reality featuring a Universe that exhibits behaviour very reminiscent of living systems.
The new Process Physics models reality as self-organising relational information and takes account of the limitations of logic, discovered by Gödel and extended by Chaitin, by using the concept of self-referential noise. Space and quantum physics are emergent and unified, and described by a Quantum Homotopic Field Theory of fractal topological defects embedded in a three-dimensional fractal process-space.
Despite a general recognition of the importance of complex systems, there is a dearth of general models capable of describing their dynamics. This is attributed to a complexity scale: the available models describe systems at different parts of the scale and are hence not compatible, and we require new models capable of describing complex behaviour at different points of the complexity scale. This work identifies, and proceeds to examine, systems at the high end of the complexity scale, those which have not to date been well understood by our current modelling methodology. It is shown that many such systems exhibit what might be termed contextual dependency, and that it is precisely this feature which our current modelling methodology fails to capture. A particular problem is discussed: our apparent inability to generate systems which display high end complexity, exhibited for example by the general failure of strong ALife. A new model, Process Physics, that has been developed at Flinders University is discussed, and arguments are presented that it exhibits high end complexity. The features of this model that lead to it displaying such behaviour are discussed, and the generalisation of this model to a broader range of complex systems is attempted.
Themes: contextuality and complexity; reductive failure; Process Physics; quantum theories as models of complexity