Comments to “Investigations” by Stuart Kauffman
Warning: This is not a review of Kauffman’s book (read it!), only sparse informal comments.
“Investigations” is a great book. It is a huge step in bringing biology and physics, the so-called “soft” and “hard” sciences, closer together. Not because it is able to reduce biology to physics. Quite the opposite. It argues for the need for new laws for understanding biospheres, laws nevertheless related to the physical ones. It is just that living organisms have properties that the systems studied by classical physics lack, mainly the fact that living organisms change their environment. Therefore it is difficult (tending to silly) to study them as isolated systems... Moreover, the classic way of studying systems (initial conditions, boundary conditions, laws, and compute away) falls short when studying systems which change their own boundaries and environment. Classical physics always assumes “all else being equal”... but with living organisms, not everything stays equal!
Once we begin to observe living systems as open, we see that they affect each other’s fitness. As Kauffman notes, living organisms co-construct each other, their niches, and their search procedures (e.g. sexual reproduction as a way of exploring new genetic combinations). Organisms and species are not only selected according to their fitness, since the fitness landscapes of different species affect each other. We can probably also speak about selection of fitness landscapes, since those which are more easily searchable by a particular method (mutation, recombination) will have an advantage. But then, the search methods will be selected according to the current fitness landscapes. A co-evolving world...
Definition of Life
What makes a cell alive? To what extent is a cell alive by itself, and to what extent because an observer describes it as alive? These are tough questions. Kauffman uses the definition of autonomous agent as a necessary and sufficient condition for life. An autonomous agent is defined as an autocatalytic system capable of reproducing and able to perform one or more thermodynamic work cycles (p. 49). Even when we can say many things about this definition of life, I believe it is the best one we have to date. We can argue, as with a “shopping list” definition (feeding, check; metabolism, check; reproduction, check; DNA, check...), about the reproducing bit. There are non-reproductive organisms which we consider alive. Ant workers, for example. Then we could say that the colony is the real organism. But what about mules? Also, the “autonomous” bit turns out to be context-dependent. Not so much in his definition, where he is basically defining what he means by autonomy. But cows would not be able to survive without humans taking care of them. Corn has lost its ability to reproduce by itself. We can even say that no living creature is really autonomous, because it depends on the energy of the sun. This seems to be the same problem we have when we have to decide the boundaries of a system: if the variables we observe are such, then we can say it is autonomous; if we observe these other variables, then the system is not autonomous...
In any case, the definition is physical in the sense that it requires physical systems where we can measure a work cycle. This is good because it grounds the definition in physical laws, but on the other hand it limits life to the physical realm. How could we then define artificial life, if in computers we do not really need to simulate thermodynamics in order to have life-like systems (e.g. Tierra)? Could we then use the definition of autonomous agent from computer science? (An agent is a thing which does things to other things... if it does so by itself (without user intervention), it is autonomous...). Probably it would be too broad... But perhaps we could find a general definition which is more functional, not requiring physical concepts. Nevertheless, Kauffman’s definition is based on a (physical) concept of organization, and this allows the study of a general biology, where we can study general properties of living organisms and better understand what it means to be alive.
Organization has been a shaky issue for some time (Ashby, 1962; Beer, 1966; Gershenson and Heylighen, 2003). The problem I see is that we can describe organization in a certain context, but then we can change the context, and we will no longer call that same phenomenon organized (Gershenson and Heylighen, 2003).
Or perhaps we could say that organization is the creation of constraints? But then how do we measure these, in case we do not know them beforehand? Would there still be organization, or would we need to perceive it (i.e. the constraints)?
An issue always arises with the second law of thermodynamics, since it can be used to describe organization. First of all, it is only one way of describing organization. Second, it is stated for closed systems. But in the real world there are no closed systems... So actually there is no “problem” with the second law from the beginning. Well, that is if we model systems as open, but the need for that became evident earlier. Another way to see this is that systems flow towards “equilibrium”. The problem is that the system itself (by interacting with its environment and other systems) can change its own equilibrium. It is like moving its own attractor... not easy for me to imagine mathematically... Or, if we follow Kauffman’s maths, the universe is rich enough that it has not had time to reach every possible equilibrium.
A silly thought keeps coming into my head. Could we speak about a “law of conservation of entropy”? In other words, if there is some order in a part of the universe, is there an equivalent disorder in another part of the universe? This again brings up the problem of how we would measure order, and of the second law of thermodynamics. But what if gravity could “undo”, in the long run, some irreversible processes, such as loss of energy (heat)? Black holes seem to accumulate it, due to their high gravitational force. Could they transform the energy they suck in into matter? Who knows, it is just a thought.
Kauffman speaks about the lack of understanding of the link between matter, energy, and information. Information needs an interpreter. This makes it relative to the interpreter. Could we speak about a “relativistic information theory”? Well, we need energy and/or matter to produce information, and also to extract it (measurement). The propagating organization of autonomous agents could be described in terms of information, and Kauffman has already linked it to physics. But again, we need an interpreter/observer... A step we are missing, I believe, is the inclusion of the observer in the definitions of organization and information. I am not sure if “second-order cybernetics” already did this, but it was going in that direction. Or should we define organization in terms of entropy, energy, and work? I believe that this would be just one type of organization. However, understanding it would be a great step in the study of life. The “blender thought experiment” makes me think about this (put a couple of each of Earth's species in a blender, press MAX, and you will have the full biomolecular diversity of the planet (for a few seconds), but not the same organization)... There is something there about the concept of organization. It also reflects the importance of levels when we observe it (Gershenson and Heylighen, 2003). If we observe only at the molecular level, there is no change in organization (until molecules begin to break apart (but even then, at the atomic level the organization would be unchanged)). It is only when we also observe at the higher level that we can perceive the organizational mess which the blender caused. It seems that in the near future the specification of a context (Gershenson, 2002) will be necessary in order to do science.
Classical physics argues that the direction of time is towards thermodynamic equilibrium. We have already seen that this is not the case for open systems. Complexity scientists then proposed that evolution tended towards more and more complexity, since it seems that our universe now is more complex than it was (could we say the same after a 10 km wide asteroid hits the Earth?).
Evolution could probably be described better not as a drive towards complexity, but rather towards occupying all available niches. This process itself makes and destroys niches. If the universe had “begun” complex, then there would now be simple creatures as well (prokaryotes are still around and bubbling). With a few simulations we have made, we saw that, independently of the initial conditions, a random walk in scale space yields a power law distribution after some time. So really evolution’s arrow points not towards higher complexity, but towards diversity... with self-organized critical extinctions here and there. As Kauffman notes, there would be somewhere a balance where the probability of aggregating compensates the probability of breaking apart. And evolution would tend towards that balance (which is diverse). This is related to the idea of a biosphere expanding into its adjacent possible as fast as it can. I feel that this is equivalent to saying that it tends to the edge of chaos. Therefore, we can also say that the direction of evolution is towards the edge of chaos (more on this below).
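The claim that a random walk in scale space settles into a power law can be illustrated with a minimal sketch (this is my own toy version, not the actual simulations mentioned above, and all parameter values here are arbitrary assumptions): many walkers take random multiplicative steps in size, with a slight downward bias and a reflecting floor, and after a while most stay small while a few grow enormous.

```python
import math
import random

def scale_random_walk(walkers=2000, steps=500, drift=-0.1, sd=0.5, seed=1):
    """Multiplicative random walk with a reflecting lower bound.

    Each walker's log-size takes a biased random step, reflected at 0.
    The stationary distribution of log-sizes is exponential, so the
    sizes themselves are distributed as a power law.
    """
    rng = random.Random(seed)
    logs = [0.0] * walkers
    for _ in range(steps):
        for i in range(walkers):
            logs[i] = abs(logs[i] + rng.gauss(drift, sd))  # reflect at 0
    return sorted(math.exp(x) for x in logs)

sizes = scale_random_walk()
median, largest = sizes[len(sizes) // 2], sizes[-1]
print(median, largest)  # most walkers stay small; a few grow very large
```

The heavy tail appears regardless of the starting point, matching the observation that the outcome is independent of initial conditions.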
If a niche is available at a lower level, could higher-level organisms “split” and become “simpler”? Could viruses have evolved like this? That is, they require complex organisms to reproduce, therefore we can argue that they were not around before complex organisms were. But they are simpler than them...
Another example can be seen with koalas. Thirty percent of their cranium is empty. This is because in Australia they have no predators, so they only need to eat, reproduce, and grip the trees when it is windy (which they can do while sleeping). Having a big brain is expensive, even more so if your diet consists of eucalyptus. Evolution found it more convenient for them to have simpler brains, because they are cheaper, and has been selecting koalas accordingly.
Open Ended Evolution
One of the “big” problems of artificial life is that software programs seem to lack open-ended evolution. In other words, they get stuck somewhere along the way. This seems to happen because they exploit everything there was to exploit in their simple environments. But is there really open-ended evolution on Earth? As it is usually understood, we would get more and more complex creatures constantly (again, evolution as complexification). But let us suppose that all the mass of Earth gets integrated into a superorganism, Gaia. How could you get more complex than that? I mean, planets do not breed. Moreover, how could you get more complexity on Earth if the sun were extinguished? But if we see evolution as diversification rather than complexification, open-ended evolution makes more sense to me...
In any case, I agree with Kauffman on why computer programs seem to get stuck in their artificial evolution. They prestate the rules of their game... What if programs changed their own rules? No, not genetic algorithms which mutate the mutation rate... more like programs which somehow change their own constraints... It would be more or less like a program not happy with a part of its code, and changing it itself. Whether that would produce open-ended evolution is another matter. Could we see this as non-axiomatic science? In the sense that even if we begin with some axioms, the program would be able to change them.
Another issue related to artificial life is that of selection procedures. In nature, there are selection pressures at all levels (multilevel selection (Michod, 1999)). The problem is that in computer programs we usually set the selective pressures and level(s). How could a program evolve to find itself subject to emergent selective pressures?
No Problem with No Free Lunch
Wolpert and Macready (1997) proved the “No Free Lunch” (NFL) theorems. From them we can conclude that, averaged over all possible problem domains, no search method is better than any other, including random search.
We can say that any open system has constraints. Because of these constraints, not all problem domains are possible. Therefore, in practice, there is no problem with the NFL theorems. These constraints are not necessarily static, nor boundaries. They limit the space of possibilities of the system. For example, if there is a rock somewhere, it takes up space. That constrains the possible problems of systems which come close to it. Since all systems are surrounded by things, these impose constraints on systems, which have to choose how to search their problem domains. But these are already limited, so there is a free lunch.
Since in practice we have only some problem domains, there are some good search methods for them. In other words, there is a free lunch, but only in a particular context. And nature builds and selects its own contexts.
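The point can be made concrete with a small sketch (a hypothetical example of mine, not from Wolpert and Macready): on a constrained, structured problem domain, a hill climber beats random search on average, even though over all possible domains they would be equivalent.

```python
import random

# A smooth, unimodal "problem domain": fitness peaks at x = 50.
# The smoothness is the constraint that buys us the free lunch.
def fitness(x):
    return -(x - 50) ** 2

def hill_climb(rng, low=0, high=100):
    """Greedy local search; counts fitness evaluations until it stops."""
    x = rng.randint(low, high)
    evals = 1
    while True:
        neighbors = [n for n in (x - 1, x + 1) if low <= n <= high]
        best = max(neighbors, key=fitness)
        evals += len(neighbors)
        if fitness(best) <= fitness(x):
            return evals  # local (here also global) optimum reached
        x = best

def random_search(rng, low=0, high=100):
    """Blind sampling; counts evaluations until the peak is found."""
    evals = 0
    while True:
        evals += 1
        if rng.randint(low, high) == 50:
            return evals

rng = random.Random(42)
trials = 200
hc = sum(hill_climb(rng) for _ in range(trials)) / trials
rs = sum(random_search(rng) for _ in range(trials)) / trials
print(hc, rs)  # hill climbing needs far fewer evaluations on this domain
```

On a landscape with no structure at all (a random lookup table), the advantage would vanish, which is exactly what the NFL theorems say.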
Computability of the Universe
It is clear that even if we knew all the laws of the universe, and all the positions of its particles, we would not be able to predict the future (contrary to Laplace’s belief). This is because of computability issues and sensitivity to initial conditions. We simply do not have the precision to determine what would be, even if the universe “is” deterministic.
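The sensitivity to initial conditions can be seen in a few lines with the logistic map, a standard textbook example (not one discussed by Kauffman): two trajectories starting closer together than any physical measurement could distinguish end up macroscopically far apart.

```python
def logistic(x):
    # Logistic map in its chaotic regime (r = 4).
    return 4.0 * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-10  # two states closer than any real measurement
gap = []
for _ in range(100):
    a, b = logistic(a), logistic(b)
    gap.append(abs(a - b))

print(max(gap))  # the 1e-10 difference has grown to macroscopic size
```

Since the initial gap roughly doubles at each step, even perfect knowledge of the laws cannot compensate for finite precision in the state.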
But now add Kauffman’s argument: we cannot prestate the configuration space of a biosphere, because evolution creates new rules which we cannot predict (from observing atoms we cannot predict that they are part of a cow, or how they could become so). All this suggests that indeed there is something fishy with classical physics. It is just not a good model anymore (for studying complex systems; for studying classical systems it is great).
Finally, how could a part of the universe compute all of the universe (including itself and its calculations)? It is clear that this is not possible (unless fractals... but the universe does not seem to be self-similar in that way). It seems that there are some important things we still do not understand about the notions of information and computation...
Edge of Chaos
I believe that the concept of “edge of chaos” has been quite valuable. I recently explored the sensitivity to initial conditions of random Boolean networks (RBNs) with different updating schemes (synchronous, asynchronous, deterministic, non-deterministic), and found that they share the same “edge of chaos” region (Gershenson, 2003). In real networks there are many factors which can affect the precise location of the edge of chaos, but it is there, and evolution can find it.
More recently I found that all types of RBNs perform complexity reduction (Gershenson, 2004), therefore, order for free.
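The ordered and chaotic regimes of RBNs can be illustrated with a toy damage-spreading experiment (a minimal sketch of my own, not the analysis in the cited papers; network size, steps, and trial counts are arbitrary assumptions): flip one node and see whether the perturbation dies out or spreads, for low and high connectivity K.

```python
import random

def rbn_damage(n, k, steps=20, trials=20, seed=0):
    """Average damage spreading in random Boolean networks.

    Each of n nodes gets k random inputs and a random Boolean function.
    We flip one node in a copy of the state and measure the normalized
    Hamming distance between the two trajectories after synchronous
    updating, averaged over random networks and initial states.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
        tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]

        def step(state):
            return [tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
                    for i in range(n)]

        a = [rng.randrange(2) for _ in range(n)]
        b = list(a)
        b[0] ^= 1  # minimal perturbation: flip a single node
        for _ in range(steps):
            a, b = step(a), step(b)
        total += sum(x != y for x, y in zip(a, b)) / n
    return total / trials

ordered = rbn_damage(50, 1)   # K = 1: damage tends to die out (order)
chaotic = rbn_damage(50, 5)   # K = 5: damage spreads through the network
print(ordered, chaotic)
```

Between these two regimes, around K = 2 for unbiased functions, lies the critical region that the “edge of chaos” refers to.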
Kauffman proposes a tentative fourth law of thermodynamics, “in which the workspace of the biosphere expands, on average, as fast as it can in this coconstructing biosphere” (p. 209). I was at first a bit puzzled by the “as fast as it can” bit, because if it goes too fast, then “good” adaptations are destroyed. Then I understood that by “as fast as it can”, Kauffman means something like the edge of chaos. Faster than that, and it cannot sustain itself. Slower, and it is not advantageous (if there is the possibility of going faster, and someone/something does, the faster ones are selected)... Life at the edge of chaos... and probably not only life, but our whole universe...
As a side note, I find it interesting that these ideas fit with the fantasy fiction of Michael Moorcock. I am no huge fan, but I do like that instead of having good against evil, Moorcock has the lords of Order fighting the lords of Chaos... with the Balance above them. And as he wrote in the Elric saga, “the only truth is that of eternal struggle” (between order and chaos). This is because once you have only order OR chaos, evolution stops. The Balance could be seen as the edge of chaos. Since these books were written in the 1970s (I have not read the recent ones), I wonder if somehow Kauffman was influenced by them... or perhaps it was one of those ideas which were “in the air”...
In the last chapter, Kauffman presents a theory where the universe, like biospheres, expands into its adjacent possible as fast as it can. The precise theory may be mistaken, but I believe it is a great step towards explaining the universe as co-constructing. We can see a similar situation with Plato: most of his answers are mistaken, but the importance of his ideas resides in his questions. Or, as Heidegger said, one who thinks greatly errs greatly. When posing big questions you can always make big mistakes in attempting to give answers. But this does not matter, because the world can work on those big questions.
If complexity has increased since the Big Bang, what can we say about before the Big Bang (if physicists allow the question)? How did all that matter and energy get there? Before, there should have been complexity reduction... or could we speak about conservation of entropy? Would the complexity of the universe then “really” remain constant? I am not sure to what extent complexity really needs an observer. In any case, it seems plausible to me that black holes could reduce complexity around them...
But if we see complexity as the “edge of chaos” (an edge of chaos which can shift itself), then indeed the universe strives towards complexity, and it always has; only the precise edge of chaos has changed and keeps changing...
I believe that Kauffman has delivered a serious blow to reductionist science. You just cannot predict whether life will emerge based on elementary particles. As Murray Gell-Mann put it, you cannot describe sheep in terms of quarks. I do think that Kauffman’s ideas are at least part of a breakthrough.
It is fortunate that Kauffman used the same title as Wittgenstein (1999), because it seems that at the end life is just a language game...
Ashby, W. R. (1962). Principles of the Self-organizing System. In von Foerster, H. and G. W. Zopf, Jr. (Eds.), Principles of Self-organization. Pergamon Press, pp. 255-278.
Beer, S. (1966). Decision and Control: The Meaning of Operational Research and Management Cybernetics. John Wiley & Sons, Inc.
Gershenson, C. (2002). Contextuality: A Philosophical Paradigm, with Applications to Philosophy of Cognitive Science. POCS Essay, COGS, University of Sussex. http://cogprints.ecs.soton.ac.uk/archive/00002621
Gershenson, C. (2003). Phase Transitions in Random Boolean Networks with Different Updating Schemes. Submitted to Physica D. http://uk.arxiv.org/abs/nlin.AO/0311008
Gershenson, C. (2004). Updating Schemes in Random Boolean Networks: Do They Really Matter? To be published in Artificial Life IX. MIT Press. http://uk.arxiv.org/abs/nlin.AO/0402006
Gershenson, C. and F. Heylighen (2003). When Can we Call a System Self-organizing? In Banzhaf, W, T. Christaller, P. Dittrich, J. T. Kim, and J. Ziegler, Advances in Artificial Life, 7th European Conference, ECAL 2003, Dortmund, Germany, pp. 606-614. LNAI 2801. Springer. http://uk.arxiv.org/abs/nlin.AO/0303020
Kauffman, S. (2000). Investigations. Oxford University Press.
Michod, R. E. (1999). Darwinian Dynamics, Evolutionary Transitions in Fitness and Individuality. Princeton Univ. Press.
Wittgenstein, L. (1999). Philosophical Investigations, 3rd Ed., tr. by G. E. M. Anscombe. Prentice Hall.
Wolpert, D. H. and W. G. Macready (1997). No Free Lunch Theorems for Optimization. IEEE Transactions on Evolutionary Computation, 1(1): 67-82.