Feed aggregator

COSMOS MIND AND MATTER: Is Mind in Spacetime?

Complexity Digest - Mon, 07/08/2024 - 03:00

Stuart Kauffman, Sudip Patra

BioSystems

We attempt in this article to formulate a conceptual and testable framework weaving Cosmos, Mind and Matter into a whole. We build on three recent discoveries, each requiring more evidence: i. The particles of the Standard Model, SU(3) x SU(2) x U(1), are formally capable of collective autocatalysis. This leads us to ask what roles such autocatalysis may have played in Cosmogenesis, and in trying to answer, Why our Laws? Why our Constants? A capacity of the particles of SU(3) x SU(2) x U(1) for collective autocatalysis may be open to experimental test, stunning if confirmed. ii. Reasonable evidence now suggests that matter can expand spacetime. The first issue is to establish this claim at or beyond 5 sigma if that can be done. If true, this process may elucidate Dark Matter, Dark Energy and Inflation and require alteration of Einstein’s Field Equations. Cosmology would be transformed. iii. Evidence at 6.49 sigma suggests that mind can alter the outcome of the two-slit experiment. If widely and independently verified, the foundations of quantum mechanics must be altered. Mind plays a role in the universe. That role may include Cosmic Mind.

Read the full article at: www.sciencedirect.com

The development of ecological systems along paths of least resistance

Complexity Digest - Sun, 07/07/2024 - 13:46

Jie Deng, Otto X. Cordero, Tadashi Fukami, Simon A. Levin, Robert M. Pringle, Ricard Solé, Serguei Saavedra

A long-standing question in biology is whether there are common principles that characterize the development of ecological systems (the appearance of a group of taxa), regardless of organismal diversity and environmental context. Classic ecological theory holds that these systems develop following a sequenced orderly process that generally proceeds from fast-growing to slow-growing taxa and depends on life-history trade-offs. However, it is also possible that this developmental order is simply the path with the least environmental resistance for survival of the component species and hence favored by probability alone. Here, we use theory and data to show that the order from fast- to slow-growing taxa is the most likely developmental path for diverse systems when local taxon interactions self-organize to minimize environmental resistance. First, we demonstrate theoretically that a sequenced development is more likely than a simultaneous one, at least until the number of iterations becomes so large as to be ecologically implausible. We then show that greater diversity of taxa and life histories improves the likelihood of a sequenced order from fast- to slow-growing taxa. Using data from bacterial and metazoan systems, we present empirical evidence that the developmental order of ecological systems moves along the paths of least environmental resistance. The capacity of simple principles to explain the trend in the developmental order of diverse ecological systems paves the way to an enhanced understanding of the collective features characterizing the diversity of life.

Read the full article at: www.biorxiv.org

Conscious artificial intelligence and biological naturalism

Complexity Digest - Sat, 07/06/2024 - 13:42

Anil Seth

As artificial intelligence (AI) continues to develop, it is natural to ask whether AI systems can be not only intelligent, but also conscious. I consider why some people think AI might develop consciousness, identifying some biases that lead us astray. I ask what it would take for conscious AI to be a realistic prospect, pushing back against some common assumptions such as the notion that computation provides a sufficient basis for consciousness. I instead make the case for taking seriously the possibility that consciousness might depend on our nature as living organisms – a form of biological naturalism. I end by exploring some wider issues including testing for consciousness in AI, and ethical considerations arising from AI that either actually is, or convincingly seems to be, conscious.

Read the full article at: osf.io

Infection patterns in simple and complex contagion processes on networks

Complexity Digest - Fri, 07/05/2024 - 13:50

Contreras DA, Cencetti G, Barrat A

PLoS Comput Biol 20(6): e1012206.

Contagion processes, representing the spread of infectious diseases, information, or social behaviors, are often schematized as taking place on networks, which encode for instance the interactions between individuals. We here observe how the network is explored by the contagion process, i.e. which links are used for contagions and how frequently. The resulting infection pattern depends on the chosen infection model, but surprisingly not all parameters and model features play a role in the infection pattern. We discover for instance that in simple contagion processes, where contagion events involve one connection at a time, the infection patterns are extremely robust across models and parameters. This has consequences for the role of models in decision-making, as it implies that numerical simulations of simple contagion processes using simplified settings can bring important insights even in the case of a new emerging disease whose properties are not yet well known. In complex contagion models instead, in which multiple interactions are needed for a contagion event, non-trivial dependencies on model parameters emerge and infection patterns cannot be confused with those observed for simple contagion.
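The link-usage measurement at the heart of this abstract can be sketched with a toy simulation. The 5-node network, transmission probability, and seed below are invented for illustration, not taken from the paper: repeated runs of a simple (one-link-at-a-time) contagion record which links transmit the infection and how often, which is the "infection pattern".

```python
import random
from collections import Counter

def simulate_si(adj, beta, seed_node, rng):
    """One run of a simple contagion; returns the links that carried a transmission."""
    infected = {seed_node}
    used_links = set()
    frontier = [seed_node]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in infected and rng.random() < beta:
                    infected.add(v)
                    used_links.add((min(u, v), max(u, v)))
                    nxt.append(v)
        frontier = nxt
    return used_links

# Toy network: a 5-node ring plus one chord (0-2)
adj = {0: [1, 2, 4], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [0, 3]}

rng = random.Random(42)
runs = 2000
usage = Counter()
for _ in range(runs):
    for link in simulate_si(adj, beta=0.5, seed_node=0, rng=rng):
        usage[link] += 1

# The infection pattern: how frequently each link is used for contagion
for link, count in sorted(usage.items()):
    print(link, round(count / runs, 3))
```

Comparing such link-frequency histograms across models and parameter settings is the kind of robustness check the study performs at scale.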

Read the full article at: journals.plos.org

Evolutionary Implications of Self-Assembling Cybernetic Materials with Collective Problem-Solving Intelligence at Multiple Scales

Complexity Digest - Thu, 07/04/2024 - 17:51

Hartl, B.; Risi, S.; Levin, M.

Entropy 2024, 26, 532

In recent years, the scientific community has increasingly recognized the complex multi-scale competency architecture (MCA) of biology, comprising nested layers of active homeostatic agents, each forming the self-orchestrated substrate for the layer above, and, in turn, relying on the structural and functional plasticity of the layer(s) below. The question of how natural selection could give rise to this MCA has been the focus of intense research. Here, we instead investigate the effects of such decision-making competencies of MCA agential components on the process of evolution itself, using in silico neuroevolution experiments of simulated, minimal developmental biology. We specifically model the process of morphogenesis with neural cellular automata (NCAs) and utilize an evolutionary algorithm to optimize the corresponding model parameters with the objective of collectively self-assembling a two-dimensional spatial target pattern (reliable morphogenesis). Furthermore, we systematically vary the accuracy with which the unicellular agents of an NCA can regulate their cell states (simulating stochastic processes and noise during development). This allows us to continuously scale the agents’ competency levels from a direct encoding scheme (no competency) to an MCA (with perfect reliability in cell decision executions). We demonstrate that an evolutionary process proceeds much more rapidly when evolving the functional parameters of an MCA compared to evolving the target pattern directly. Moreover, the evolved MCAs generalize well toward system parameter changes and even modified objective functions of the evolutionary process. Thus, the adaptive problem-solving competencies of the agential parts in our NCA-based in silico morphogenesis model strongly affect the evolutionary process, suggesting significant functional implications of the near-ubiquitous competency seen in living matter.
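A minimal, illustrative neural cellular automaton update shows the basic ingredients the abstract refers to; the grid size, channel count, random linear rule, and noise level below are assumptions for the sketch, not the authors' model. Every cell applies the same parameterized local rule, and a noise knob degrades the cells' "competency" in executing their state updates.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 8, 8, 4                             # small grid, C channels per cell
params = rng.normal(0, 0.1, size=(3 * C, C))  # the evolvable update rule

def perceive(grid):
    """Each cell sees its own state plus neighbour sums along x and y."""
    dx = np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1)
    dy = np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0)
    return np.concatenate([grid, dx, dy], axis=-1)

def nca_step(grid, noise=0.0):
    """One homogeneous local update; `noise` degrades cell competency."""
    update = np.tanh(perceive(grid) @ params)
    update += noise * rng.normal(size=update.shape)
    return grid + 0.1 * update

grid = np.zeros((H, W, C))
grid[H // 2, W // 2] = 1.0                    # single "seed" cell
for _ in range(16):
    grid = nca_step(grid, noise=0.01)
print(grid.shape, float(np.abs(grid).max()))
```

In the paper's framework, an evolutionary algorithm would optimize `params` so the grid self-assembles a target pattern, and the `noise` parameter is what scales the agents' competency level.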

Read the full article at: www.mdpi.com

An Invitation to Universality in Physics, Computer Science, and Beyond

Complexity Digest - Thu, 07/04/2024 - 13:48

Tomáš Gonda, Gemma De les Coves

A universal Turing machine is a powerful concept – a single device can compute any function that is computable. A universal spin model, similarly, is a class of physical systems whose low energy behavior simulates that of any spin system. Our categorical framework for universality (arXiv:2307.06851) captures these and other examples of universality as instances. In this article, we present an accessible account thereof with a focus on its basic ingredients and ways to use it. Specifically, we show how to identify necessary conditions for universality, compare types of universality within each instance, and establish that universality and negation give rise to unreachability (such as uncomputability).

Read the full article at: arxiv.org

Minimalist exploration strategies for robot swarms at the edge of chaos

Complexity Digest - Wed, 07/03/2024 - 15:49

Vinicius Sartorio, Luigi Feola, Emanuel Estrada, Vito Trianni, Jonata Tyska Carvalho

Effective exploration abilities are fundamental for robot swarms, especially when small, inexpensive robots are employed (e.g., micro- or nano-robots). Random walks are often the only viable choice if robots are too constrained regarding sensors and computation to implement state-of-the-art solutions. However, identifying the best random walk parameterisation may not be trivial. Additionally, variability among robots in terms of motion abilities – a very common condition when precise calibration is not possible – introduces the need for flexible solutions. This study explores how random walks that present chaotic or edge-of-chaos dynamics can be generated. We also evaluate their effectiveness for a simple exploration task performed by a swarm of simulated Kilobots. First, we show how Random Boolean Networks can be used as controllers for the Kilobots, achieving a significant performance improvement compared to the best parameterisation of a Lévy-modulated Correlated Random Walk. Second, we demonstrate how chaotic dynamics are beneficial to maximise exploration effectiveness. Finally, we demonstrate how the exploration behavior produced by Boolean Networks can be optimized through an Evolutionary Robotics approach while maintaining the chaotic dynamics of the networks.
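For readers unfamiliar with Random Boolean Networks, a minimal sketch follows; the network size, connectivity, and the idea of reading two nodes as motor bits are illustrative assumptions, not the authors' controller. Each node holds a binary state, reads K fixed input nodes, and updates through its own random truth table; K = 2 places a random ensemble near the critical (edge-of-chaos) regime.

```python
import random

class RBN:
    """Minimal Random Boolean Network with K inputs per node."""
    def __init__(self, n, k, rng):
        self.state = [rng.randint(0, 1) for _ in range(n)]
        self.inputs = [rng.sample(range(n), k) for _ in range(n)]
        # One random truth table (2**k entries) per node
        self.tables = [[rng.randint(0, 1) for _ in range(2 ** k)]
                       for _ in range(n)]

    def step(self):
        new = []
        for node in range(len(self.state)):
            idx = 0
            for src in self.inputs[node]:
                idx = (idx << 1) | self.state[src]
            new.append(self.tables[node][idx])
        self.state = new

rng = random.Random(7)
net = RBN(n=16, k=2, rng=rng)
for _ in range(20):
    net.step()
# e.g. read two designated nodes as left/right motor bits of a robot
left_motor, right_motor = net.state[0], net.state[1]
print(left_motor, right_motor)
```

A controller of this kind needs no sensing or arithmetic beyond bit lookups, which is what makes it attractive for Kilobot-class hardware.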

Read the full article at: arxiv.org

Life as No One Knows It, by Sara Imari Walker

Complexity Digest - Tue, 07/02/2024 - 19:56

An intriguing new scientific theory that explains what life is and how it emerges.

What is life? This is among the most difficult open problems in science, right up there with the nature of consciousness and the existence of matter. All the definitions we have fall short. None help us understand how life originates or the full range of possibilities for what life on other planets might look like.

In Life as No One Knows It, physicist and astrobiologist Sara Imari Walker argues that solving the origin of life requires radical new thinking and an experimentally testable theory for what life is. This is an urgent issue for efforts to make life from scratch in laboratories here on Earth and missions searching for life on other planets.

Walker proposes a new paradigm for understanding what physics encompasses and what we recognize as life. She invites us into a world of maverick scientists working without a map, seeking not just answers but better ways to formulate the biggest questions we have about the universe. The book culminates with the bold proposal of a new theory for identifying and classifying life, one that applies not just to biological life on Earth but to any instance of life in the universe. Rigorous, accessible, and vital, Life as No One Knows It celebrates the mystery of life and the explanatory power of physics.

More at: www.penguinrandomhouse.com

How Is Science Even Possible?

Complexity Digest - Tue, 07/02/2024 - 13:44

How are scientists able to crack fundamental questions about nature and life? How does math make the complex cosmos understandable? In this episode, the physicist Nigel Goldenfeld and co-host Steven Strogatz explore the deep foundations of the scientific process.

Read the full article at: www.quantamagazine.org

Evolving reservoir computers reveals bidirectional coupling between predictive power and emergent dynamics

Complexity Digest - Mon, 07/01/2024 - 07:38

Hanna M. Tolle, Andrea I Luppi, Anil K. Seth, Pedro A. M. Mediano

Biological neural networks can perform complex computations to predict their environment, far above the limited predictive capabilities of individual neurons. While conventional approaches to understanding these computations often focus on isolating the contributions of single neurons, here we argue that a deeper understanding requires considering emergent dynamics – dynamics that make the whole system “more than the sum of its parts”. Specifically, we examine the relationship between prediction performance and emergence by leveraging recent quantitative metrics of emergence, derived from Partial Information Decomposition, and by modelling the prediction of environmental dynamics in a bio-inspired computational framework known as reservoir computing. Notably, we reveal a bidirectional coupling between prediction performance and emergence, which generalises across task environments and reservoir network topologies, and is recapitulated by three key results: 1) Optimising hyperparameters for performance enhances emergent dynamics, and vice versa; 2) Emergent dynamics represent a near sufficient criterion for prediction success in all task environments, and an almost necessary criterion in most environments; 3) Training reservoir computers on larger datasets results in stronger emergent dynamics, which contain task-relevant information crucial for performance. Overall, our study points to a pivotal role of emergence in facilitating environmental predictions in a bio-inspired computational architecture.
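As a concrete point of reference, the reservoir-computing framework the authors build on can be sketched in a few lines. This is a generic echo state network with invented sizes and a toy sine-prediction task, not the paper's experiments or its emergence metrics: a fixed random recurrent "reservoir" is driven by the input, and only a linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, washout, T = 50, 50, 500

# Input signal: a sine wave; the task is one-step-ahead prediction
u = np.sin(0.2 * np.arange(T + 1))

# Fixed random reservoir, rescaled so the spectral radius is below 1
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

# Drive the reservoir and collect its states
x = np.zeros(n_res)
states = []
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states.append(x.copy())
X = np.array(states[washout:])
y = u[washout + 1 : T + 1]

# Linear readout trained by ridge regression (the only trained part)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
mse = float(np.mean((X @ W_out - y) ** 2))
print("one-step prediction MSE:", mse)
```

In the study, it is the collective dynamics of `x` across the reservoir, quantified with Partial Information Decomposition, that are tested for emergence alongside prediction performance.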

Read the full article at: arxiv.org

Impact of navigation apps on congestion and spread dynamics on a transportation network

Complexity Digest - Mon, 07/01/2024 - 07:34

Alben Rome Bagabaldo, Qianxin Gan, Alexandre M. Bayen & Marta C. González

Data Science for Transportation Volume 6, article number 12, (2024)

In recent years, the widespread adoption of navigation apps by motorists has raised questions about their impact on local traffic patterns. Users increasingly rely on these apps to find better, real-time routes to minimize travel time. This study uses microscopic traffic simulations to examine the connection between navigation app use and traffic congestion. The research incorporates both static and dynamic routing to model user behavior. Dynamic routing represents motorists who actively adjust their routes based on app guidance during trips, while static routing models users who stick to known fastest paths. Key traffic metrics, including flow, density, speed, travel time, delay time, and queue lengths, are assessed to evaluate the outcomes. Additionally, we explore congestion propagation at various levels of navigation app adoption. To understand congestion dynamics, we apply a susceptible–infected–recovered (SIR) model, commonly used in disease spread studies. Our findings reveal that traffic system performance improves when 30–60% of users follow dynamic routing. The SIR model supports these findings, highlighting the most efficient congestion propagation-to-dissipation ratio when 40% of users adopt dynamic routing, as indicated by the lowest basic reproductive number. This research provides valuable insights into the intricate relationship between navigation apps and traffic congestion, with implications for transportation planning and management.
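The SIR analogy the study applies to congestion can be sketched with the standard compartmental equations; the rates below are hypothetical and the mapping of compartments to road links is only schematic, not the paper's calibrated model. The ratio beta/gamma plays the role of the basic reproduction number R0, the propagation-to-dissipation ratio the abstract refers to.

```python
# S: free-flowing, I: congested, R: dissipated (recovered) road links
def sir(beta, gamma, i0=0.01, dt=0.01, steps=10000):
    """Euler integration of dS/dt=-bSI, dI/dt=bSI-gI, dR/dt=gI."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i
        rec = gamma * i
        s -= new_inf * dt
        i += (new_inf - rec) * dt
        r += rec * dt
        peak = max(peak, i)
    return peak, r

# Hypothetical rates: R0 = beta / gamma is the propagation-to-dissipation ratio
results = {}
for beta in (0.2, 0.4, 0.8):
    peak, final = sir(beta, gamma=0.4)
    results[beta] = final
    print(f"R0={beta / 0.4:.1f}  peak congestion={peak:.3f}  total affected={final:.3f}")
```

Below R0 = 1 the congestion dies out; above it, congestion propagates through a macroscopic fraction of the network, which is why the lowest fitted R0 signals the most efficient adoption level.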

Read the full article at: link.springer.com

Computational Life: How Well-formed, Self-replicating Programs Emerge from Simple Interaction

Complexity Digest - Sat, 06/29/2024 - 12:54

Blaise Agüera y Arcas, Jyrki Alakuijala, James Evans, Ben Laurie, Alexander Mordvintsev, Eyvind Niklasson, Ettore Randazzo, Luca Versari

The fields of Origin of Life and Artificial Life both question what life is and how it emerges from a distinct set of “pre-life” dynamics. One common feature of most substrates where life emerges is a marked shift in dynamics when self-replication appears. While there are some hypotheses regarding how self-replicators arose in nature, we know very little about the general dynamics, computational principles, and necessary conditions for self-replicators to emerge. This is especially true on “computational substrates” where interactions involve logical, mathematical, or programming rules. In this paper we take a step towards understanding how self-replicators arise by studying several computational substrates based on various simple programming languages and machine instruction sets. We show that when random, non self-replicating programs are placed in an environment lacking any explicit fitness landscape, self-replicators tend to arise. We demonstrate how this occurs due to random interactions and self-modification, and can happen with and without background random mutations. We also show how increasingly complex dynamics continue to emerge following the rise of self-replicators. Finally, we show a counterexample of a minimalistic programming language where self-replicators are possible, but so far have not been observed to arise.

Read the full article at: arxiv.org

Laplacian Renormalization Group: An introduction to heterogeneous coarse-graining

Complexity Digest - Sat, 06/29/2024 - 11:05

Guido Caldarelli, Andrea Gabrielli, Tommaso Gili, Pablo Villegas

The renormalization group (RG) constitutes a fundamental framework in modern theoretical physics. It allows the study of many systems showing states with large-scale correlations and their classification into a relatively small set of universality classes. RG is the most powerful tool for investigating organizational scales within dynamic systems. However, the application of RG techniques to complex networks has presented significant challenges, primarily due to the intricate interplay of correlations on multiple scales. Existing approaches have relied on hypotheses involving hidden geometries, embedding complex networks into hidden metric spaces. Here, we present a practical overview of the recently introduced Laplacian Renormalization Group (LRG) for heterogeneous networks. First, we present a brief overview that justifies the use of the Laplacian as a natural extension of well-known field theories to analyze spatial disorder. We then draw an analogy to traditional real-space renormalization group procedures, explaining how the LRG generalizes the concept of “Kadanoff supernodes” as block nodes that span multiple scales. These supernodes help mitigate the effects of cross-scale correlations due to small-world properties. Additionally, we rigorously define the LRG procedure in momentum space in the spirit of the Wilson RG. Finally, we present several analyses of how network properties evolve along the LRG flow as the network is progressively reduced.
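The central object of the LRG is the heat kernel of the graph Laplacian. A minimal sketch on an invented 4-node graph (the example and diffusion scales are illustrative, not from the paper) shows the normalized heat kernel treated as a density matrix whose spectral entropy changes as the diffusion scale tau grows.

```python
import numpy as np

# Toy graph: a 4-cycle (0-1-2-3-0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
w = np.linalg.eigvalsh(L)        # Laplacian spectrum

# Heat-kernel density matrix rho(tau) = exp(-tau L) / Tr exp(-tau L);
# its von Neumann entropy measures how much structure diffusion still
# resolves at scale tau -- the quantity the LRG flow acts on.
entropies = {}
for tau in (0.1, 1.0, 10.0):
    p = np.exp(-tau * w)
    p /= p.sum()
    entropies[tau] = float(-np.sum(p * np.log(p)))
    print(f"tau={tau}: spectral entropy={entropies[tau]:.3f}")
```

The entropy decays from log(n) at small tau towards zero on a connected graph; the scales at which it drops identify the mesoscopic structures that the LRG coarse-grains into supernodes.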

Read the full article at: arxiv.org

Beehive scale-free emergent dynamics

Complexity Digest - Sat, 06/29/2024 - 09:14

Ivan Shpurov, Tom Froese & Dante R. Chialvo 

Scientific Reports volume 14, Article number: 13404 (2024)

It has been repeatedly reported that the collective dynamics of social insects exhibit universal emergent properties similar to other complex systems. In this note, we study a previously published data set in which the positions of thousands of honeybees in a hive are individually tracked over multiple days. The results show that the hive dynamics exhibit long-range spatial and temporal correlations in the occupancy density fluctuations, despite the characteristically short-range mutual interactions between bees. The variations in occupancy unveil a non-monotonic relation between density and bee flow, reminiscent of car traffic dynamics near a jamming transition, at which system performance is optimized to achieve the highest possible throughput. Overall, these results suggest that the collective dynamics of the beehive are self-adjusted towards a point near its optimal density.

Read the full article at: www.nature.com

Assembly Theory and its Relationship with Computational Complexity

Complexity Digest - Fri, 06/28/2024 - 19:10

Christopher Kempes, Sara I. Walker, Michael Lachmann, Leroy Cronin

Assembly theory (AT) quantifies selection using the assembly equation and identifies complex objects that occur in abundance based on two measurements: assembly index and copy number. The assembly index is determined by the minimal number of recursive joining operations necessary to construct an object from basic parts, and the copy number is how many instances of the given object are observed. Together, these allow the definition of a quantity, called Assembly, which captures the amount of causation required to produce the observed objects in the sample. AT’s focus on how selection generates complexity offers an approach distinct from that of computational complexity theory, which focuses on minimum descriptions via compressibility. To explore the formal differences between the two approaches, we present several simple and explicit mathematical examples demonstrating that the assembly index, itself only one piece of the theoretical framework of AT, is formally not equivalent to other commonly used complexity measures from computer science and information theory, including Huffman encoding and Lempel-Ziv-Welch compression.
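The assembly equation itself is short enough to state directly. The sketch below assumes the form A = sum_i e^{a_i} (n_i - 1) / N_T from the AT literature, with invented assembly indices and copy numbers for the sample; computing the assembly index a_i of a real object is the hard combinatorial part and is not attempted here.

```python
import math

def assembly(objects, total):
    """Assembly equation of AT: A = sum_i e^{a_i} * (n_i - 1) / total,
    with a_i the assembly index and n_i the copy number of object i.
    Objects observed only once (n_i = 1) contribute nothing."""
    return sum(math.exp(a) * (n - 1) / total for a, n in objects)

# Hypothetical sample: (assembly_index, copy_number) pairs
sample = [(3, 100), (7, 5), (12, 1)]
N_T = sum(n for _, n in sample)
print(f"Assembly of sample: {assembly(sample, N_T):.2f}")
```

Note the exponential weight on the assembly index: a single high-index object observed in even modest abundance dominates the sum, which is how AT operationalizes "complex and abundant" as evidence of selection.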

Read the full article at: arxiv.org

Hidden citations obscure true impact in science

Complexity Digest - Fri, 06/28/2024 - 15:07

Xiangyi Meng, Onur Varol, Albert-László Barabási

PNAS Nexus, Volume 3, Issue 5, May 2024, page 155

References, the mechanism scientists rely on to signal previous knowledge, have lately turned into widely used and misused measures of scientific impact. Yet, when a discovery becomes common knowledge, citations suffer from obliteration by incorporation. This leads to the concept of the hidden citation, representing clear textual credit to a discovery without a reference to the publication embodying it. Here, we rely on unsupervised interpretable machine learning applied to the full text of each paper to systematically identify hidden citations. We find that for influential discoveries hidden citations outnumber citation counts, emerging regardless of publishing venue and discipline. We show that the prevalence of hidden citations is not driven by citation counts, but rather by the degree of discourse on the topic within the text of the manuscripts, indicating that the more a discovery is discussed, the less visible it is to standard bibliometric analysis. Hidden citations indicate that bibliometric measures offer a limited perspective on quantifying the true impact of a discovery, raising the need to extract knowledge from the full text of the scientific corpus.

Read the full article at: academic.oup.com

Feedback: How to Destroy or Save the World — Péter Érdi

Complexity Digest - Fri, 06/28/2024 - 13:46

The book offers an exciting, non-technical intellectual journey through the application of feedback control to emerging local and global crises and their management, thus keeping the world on a sustainable trajectory. There is a narrow border between destruction and prosperity: to ensure reasonable growth but avoid existential risk, we must find the fine-tuned balance between positive and negative feedback. This book addresses readers belonging to various generations: young people growing up in a world where everything seems to be falling apart; people in their 30s and 40s who are thinking about how to live a fulfilling life; readers in their 50s and 60s thinking back on life; and Baby Boomers reflecting on their past successes and failures.

Read the full article at: link.springer.com

François Chollet on Deep Learning and the Meaning of Intelligence

Complexity Digest - Fri, 06/28/2024 - 12:54

Which is more intelligent, ChatGPT or a 3-year old? Of course this depends on what we mean by “intelligence.” A modern LLM is certainly able to answer all sorts of questions that require knowledge far past the capacity of a 3-year old, and even to perform synthetic tasks that seem remarkable to many human grown-ups. But is that really intelligence? François Chollet argues that it is not, and that LLMs are not ever going to be truly “intelligent” in the usual sense — although other approaches to AI might get there.

Listen at: www.preposterousuniverse.com

Heinz von Foerster’s operational epistemology: orientation for insight into complexity

Complexity Digest - Fri, 06/28/2024 - 12:21

Arantzazu Saratxaga Arregi
Kybernetes

Purpose

Based on the reception of the principle of self-organization – the core of Heinz von Foerster’s operational theories – I hypothesize that Heinz von Foerster’s theory can serve as an orientation model for the epistemological problem of complexity. I have chosen this study to demonstrate complexity as an epistemological problem, because the question of how order arises – the core problem of complexity – is an epistemological question for which Heinz von Foerster developed an epistemology of self-organization. I do not present new research, since HvF already had the complex organization of systems in mind; rather, I build a critical approach to complexity on HvF’s research and work on operational epistemology.

Design/methodology/approach

This article aims to provide an orientation for a philosophical and epistemological understanding of complexity through a reading of Heinz von Foerster’s operational theory. The article attempts to establish complexity as an epistemological phenomenon through the following method: (1) a conceptual description of the science of complexity based on the turn to thermodynamic time, (2) a genealogy of complexity going back to the systemic method, and (3) Heinz von Foerster’s cybernetic approach to self-organization.

Findings

Based on the reception of the principle of self-organization, the core of Heinz von Foerster’s operational theories, the conclusion is drawn that complexity as a description is based on language games.

Research limitations/implications

The results present complexity not as an object of science, but as a mode of description that serves the understanding of complex phenomena.

Social implications

The hypothesis that complexity is a question of description or observation – that is, of the descriptions that language serves – has enormous social implications: the description of complex systems and the recognition of their orders (patterns) cannot be left to algorithmic governmentality, but must be carried out by a social agency.

Originality/value

HvF’s operational epistemology can serve as an epistemological model for critical complexity theory.

Read the full article at: www.emerald.com

Unveiling the reproduction number scaling in characterizing social contagion coverage

Complexity Digest - Fri, 06/28/2024 - 11:09

Xiangrong Wang, Hongru Hou, Dan Lu, Zongze Wu, Yamir Moreno

Chaos, Solitons & Fractals

Volume 185, August 2024, 115119

The spreading of diseases depends critically on the reproduction number, which gives the expected number of new cases produced by infectious individuals during their lifetime. Here we reveal a widespread power-law scaling relationship between the variance and the mean of the reproduction number across simple and complex contagion mechanisms on various network structures. This scaling relation is verified on an empirical scientific collaboration network and analytically studied using generating functions. Specifically, we explore the impact of the scaling law of the reproduction number on the expected size of cascades of contagions. We find that the mean cascade size can be inferred from the mean reproduction number, albeit with limitations in capturing spreading variations. Nonetheless, insights derived from the tail of the distribution of the reproduction number contribute to explaining cascade size variation and allow the distinction between simple and complex contagion mechanisms. Our study sheds light on the intricate dynamics of spreading processes and cascade sizes in social networks, offering valuable insights for managing contagion outbreaks and optimizing responses to emerging threats.
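The variance-mean relationship of the reproduction number can be probed with a toy simulation; the Erdős–Rényi network, transmission probabilities, and run counts below are invented for illustration and are not the paper's empirical or analytical setup. Each SIR run records how many secondary cases every infector produces, from which the mean and variance of the reproduction number follow.

```python
import random

def er_graph(n, p, rng):
    """Erdos-Renyi random graph as an adjacency-list dict."""
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def secondary_cases(adj, beta, rng):
    """One SIR run; returns the number of secondary infections per infector."""
    seed = rng.randrange(len(adj))
    infected = {seed}
    offspring = {seed: 0}
    frontier = [seed]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in infected and rng.random() < beta:
                    infected.add(v)
                    offspring[v] = 0
                    offspring[u] += 1
                    nxt.append(v)
        frontier = nxt
    return list(offspring.values())

rng = random.Random(1)
adj = er_graph(300, 0.02, rng)
results = {}
for beta in (0.1, 0.2, 0.4):
    counts = []
    for _ in range(200):
        counts.extend(secondary_cases(adj, beta, rng))
    m = sum(counts) / len(counts)
    var = sum((c - m) ** 2 for c in counts) / len(counts)
    results[beta] = (m, var)
    print(f"beta={beta}: mean R={m:.2f}, variance={var:.2f}")
```

Sweeping the transmission probability (or swapping in a complex-contagion rule) and plotting variance against mean on log-log axes is the kind of analysis in which the reported power-law scaling appears.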

Read the full article at: www.sciencedirect.com
