



1. Behaviour-Based Systems



"... they have been created for life,

not for thinking!"

--Hermann Hesse






"Polar bears". Carlos Gershenson, Mexico City, 2000. Ink on paper.



Before describing behaviour-based systems, we first set out a background in artificial intelligence: how, why, and from where behaviour-based systems arose. We also address some problems present in artificial intelligence, such as the concept of intelligence and the capabilities of intelligent machines. We then describe behaviour-based systems. Finally, we mention some of their applications.



1.1. Background



"The hardest thing to understand is why we can understand anything at all"

--Albert Einstein



From the beginnings of Artificial Intelligence (AI), in the late 1950s, researchers in the area have tried to simulate human intelligence by representing knowledge. Knowledge representation is among the most abstract, and most evolved, ways of exhibiting intelligence. That is, animals less evolved than humans might exhibit intelligence, but not at a knowledge level (1). This led researchers simulating intelligence to create the so-called knowledge-based systems (KBS). KBS are thus tailored to the simulation of the most abstract elements of thought (reasoning, for example). This made KBS very effective at simulating abstract exhibitions of intelligence: proving theorems, solving problems, playing chess, and so on; essentially, simulating things which are "very difficult" from an intellectual point of view. KBS were good at what the people who supplied the knowledge to build them were good at. But it turned out to be very difficult for KBS to simulate "very simple" things, also from an intellectual point of view: activities such as walking along crowded corridors, cleaning, or parking a car. These are things we do subconsciously, without any intellectual effort, but which require a great deal of coordination and complex interaction with an open environment. It became clear that modelling "simple" intelligence from "abstract" intelligence was neither easy nor computationally efficient.

So, by the mid-1980s, researchers in AI realized that the "simple" intelligence they were trying to model was present in animals, in their adaptive behaviour (McFarland, 1981; Beer, 1990; Pfeifer and Scheier, 1999), which is studied by ethology (Manning, 1979; Tinbergen, 1951; Lorenz, 1981). Animals perhaps cannot play chess successfully (Brooks, 1990), but it seems very easy for them to search for food when they are hungry, organize into societies when they need to, run away when they perceive a predator, and so on. In general, animals can react, and adapt, to changes in their dynamic environment. To an observer, this behaviour appears to be intelligent (2). From this perspective, researchers began to model intelligence based on behaviour, instead of on knowledge, creating the so-called behaviour-based systems (BBS) (Brooks, 1986; Brooks, 1991).

Figure 3 shows a diagram of the issues discussed above. We perceive natural exhibitions of intelligence (i.e. what we judge to be intelligent), and then we model them in a synthetic way (Steels, 1995; Verschure, 1998; Castelfranchi, 1998). Our synthetic theory will help to explain our perceptions if it is capable of reproducing what we perceive. For example, we will know more about how language works if we build an artificial language module, and we will understand more about perception if we engineer a robotic vision system, instead of "just" making theories of them. We will understand more about intelligence as we build artificial intelligence. This "synthetic method", as described by Steels (1995), is different from the "inductive method". The inductive method observes facts, then makes a generalization or abstraction, to develop a theory. The theory is used to predict facts, which are verified against observed facts, which falsify or justify the theory. A diagram of this method can be seen in Figure 4. The synthetic method also generalizes or abstracts observed phenomena to produce a theory, except that this theory is used to engineer an artificial system as a substitute for the natural system. This artificial system is operated, and its performance is observed, which falsifies or justifies the theory depending on how similar the observed performance is to the observed facts. A diagram of this method can be seen in Figure 5. The idea of this method is to build a "parallel", or artificial, system which should behave in a way similar to the natural system that inspired it. If it does, it helps us comprehend the real system.





In Figure 3, we can see that KBS are mainly synthetic theories of cognitive processes. KBS have not been able to model adaptive behaviour successfully (Maes, 1993; Clark, 1997). BBS are mainly synthetic theories of adaptive behaviour. At this moment, it has not been possible to model cognitive processes from BBS, but it seems that this, once achieved, would allow the simulation of the whole range of human intelligence: from adaptive behaviour to cognitive processes (Castelfranchi, 1998). This is because such a system would evolve in a way similar to the way our capability for thought has. One way to achieve this would be by learning patterns from behaviour. After learning patterns, concepts would have to be learned, in order to learn a language. This can only be done in a society, because an individual can perceive himself only in his peers. After the language, a logic should be learned (abstracted) from the language. All these processes should be emergent. Once at a logic level, cognitive processes should be successfully reproduced. We would have a behaviour-based cognitive emergence.



       



KBS are also known as "top-down" systems (Maes, 1993), because they are (in most cases) designed and constructed from the whole to the parts. This means that from the beginning of the development we should have a complete idea of what the system should do. The problem domain should be defined and limited. The control is (usually) centralized. BBS are also known as "bottom-up" systems (Maes, 1993), because they are designed and constructed from interacting parts (usually autonomous agents (3)) that together make the system functional (also in most cases). This approach allows an incremental development of the system, and also a "graceful" and robust degradation when some parts of the system fail or are removed. This also allows BBS to respond to open, incomplete, or unknown problem domains, giving flexibility in the case of unexpected events. In BBS the control is (usually) distributed. A useful comparison between the advantages and disadvantages of KBS and BBS was made by Maes (1993).
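To make the bottom-up style concrete, the following sketch (a hypothetical, minimal example of our own, not any particular published architecture) assembles a controller from independent behaviour modules arbitrated by fixed priority. Note how removing one module degrades the system gracefully instead of breaking it.

```python
# Minimal sketch of a bottom-up, behaviour-based controller (hypothetical example).
# Each behaviour independently maps percepts to an action proposal (or None);
# a simple fixed-priority arbiter picks the first applicable proposal.

def avoid_obstacle(percepts):
    # Highest priority: react to nearby obstacles.
    if percepts.get("obstacle_distance", float("inf")) < 0.5:
        return "turn_away"
    return None

def seek_food(percepts):
    # Lower priority: approach food when hungry.
    if percepts.get("hungry") and percepts.get("food_visible"):
        return "approach_food"
    return None

def wander(percepts):
    # Default behaviour: always applicable.
    return "wander"

# Behaviours listed from highest to lowest priority.
behaviours = [avoid_obstacle, seek_food, wander]

def decide(percepts):
    for behaviour in behaviours:
        action = behaviour(percepts)
        if action is not None:
            return action

if __name__ == "__main__":
    print(decide({"obstacle_distance": 0.3}))              # turn_away
    print(decide({"hungry": True, "food_visible": True}))  # approach_food
    # Removing a behaviour degrades the system gracefully:
    behaviours.remove(seek_food)
    print(decide({"hungry": True, "food_visible": True}))  # wander
```

The point is structural: control is distributed among the modules, and no single module needs a complete model of the problem domain, in contrast with a top-down, centralized controller.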



1.1.1. What do we understand by intelligence?



"Intelligence is given when in the mind there are two contradictory thoughts. One proof of it is that mankind knows that it is lost, and although, it does everything it can to save itself."

--F. Scott Fitzgerald



We could do as Sir Isaac Newton did when he was questioned about the definitions of time, movement, and space: "I do not need to define them, for they are well known to everyone". We could say: "We all know what intelligence is, we use the word every day, so why should we spend a whole section trying to define it?". We will not give a formal definition of intelligence. We will give a notion of what we understand when we say "something is intelligent", so that at least we know in what context we are talking about intelligence.

This is Dr. Mario Lagunez's definition of intelligence: "In order for us to say that something is intelligent (a person, a robot, a system), first, he/she/it must perform an action. Then, a third person (an observer) should judge whether the action was performed intelligently or not". We agree with this definition, which is similar to Turing's (Turing, 1950). Not only does it describe what we can understand by intelligence, but also what the problem is when we try to define intelligence. The problem is that the judgement of intelligence depends entirely on the observer's criteria. For example, we believe that a creature, natural or artificial, able to survive in his or her environment is intelligent (of course there are different degrees of intelligence). This obviously changes from observer to observer, so, regarding the same action, one observer might say that it was intelligent and another that it was not. The first definitions of intelligence thus stuck to the criteria of the definer regarding what he judged to be intelligent, and people with different criteria would disagree with such a definition of intelligence.

Abstract concepts, such as intelligence, cannot have a concise, unequivocal definition. This is because abstract concepts are applied in many different situations. So, we take a stance similar to that of Metrodorus of Chios with his phrase "all things are what one thinks of them". We say: "Intelligent actions are the ones people judge to be intelligent".

"Intelligent" is an adjective useful for a post hoc characterization of a behaviour. In describing an intelligent system, the action (the what) is more important than the functioning of the system (the how) (4). Of course, the more we understand about intelligence, the clearer the notion we will have of it (Steels, 1996).



1.1.2. Will machines be able to have the same, or more, intelligence than humans?



"...even if these artifacts (machines) perform certain acts better than us, they would do it without the conscience of them... ...it is morally impossible that a machine will work in all the circumstances of life in the same way as our reason makes us work".

--Descartes



One of the main objectives of classical AI was to develop machines with intellectual capabilities equal, or superior, to our own. After more than forty years, this still does not seem near, and some people believe it never will be.

One of the strongest arguments against this was the so-called "Chinese room" argument (Searle, 1980): we put an Englishman who does not know Chinese in a closed room, with many symbols of the Chinese language and a book of instructions in English explaining how to manipulate the symbols when a given set of symbols (a question) arrives. Chinese scientists pass him questions in Chinese, he manipulates the Chinese symbols accordingly, and he gives a correct answer in Chinese. But he is not conscious of what he did. The suggestion is that a machine behaves in a similar way: it might give correct answers, but it is not conscious of what it is doing.

Well, according to what we stated about intelligence in the previous section, we could judge that the unconscious answer was an intelligent one. But let us discuss consciousness. We can drive a car without being conscious of how the engine works. We can use a computer without knowing anything about electronics or microprocessors. We can live without knowing what life is. We can love without knowing what love is. And we can think without knowing how our minds work. So let us apply the Chinese room argument to ourselves. How can we think if we do not know how we think? We think without being conscious of how we do it. We are conscious of what we understand and are able to explain and predict. There is no reason why a machine should not be able to do the same thing. Neither we nor machines can be completely conscious, because that would require knowing everything. So, we can say that men and machines have certain degrees of consciousness. At this point, men have higher degrees of consciousness than machines do (even at playing chess?).

Many people think that a machine cannot be more intelligent than the one who created it. How, then, can a student be better than his teacher? Well, many people thought that it was impossible for men to fly, or to go to the moon, or that a machine would ever play chess better than men. It is indeed possible to create machines more intellectually capable than a single man, and there are several ways to achieve this. For example, a multi-expert system can contain the knowledge of many experts from different areas (e.g. González, 1995). Perhaps it will not know more about a speciality than each expert whose knowledge was used to create the system, but it will have a much more general vision of a problem because of the knowledge of the other experts. So, by aggregation of knowledge, a machine might be more intelligent (and more conscious) than a single human.

If we "teach" (program) a machine to learn, it could learn its way to be more intelligent than the ones who "taught" it to learn, the same way as a child learns his way to (perhaps) be more intelligent than his teachers (of course "one could not send the machine to school without the other children making excessive fun of it" (Turing, 1950)). This would be learning of knowledge.

We could also attempt to give machines the capability of learning by themselves, in a similar way to how we did: by evolution of knowledge. Machines might evolve themselves into beings more intelligent and more conscious than men, improving from generation to generation (always depending on what we understand by intelligence and consciousness). Evolution, natural or artificial, is a slow (but sure) process, because it requires experimentation on how suited individuals are to their environment, and on how they might change in order to improve without losing their useful capabilities. In any case, artificial evolution would not be as slow as natural evolution, because it can learn from its mistakes (in natural evolution the information of dead animals (some of which might have been mistakes) is lost), and it can be directed (natural evolution seems to have no goal). But it would nevertheless take a lot of time (5).

Should we panic? Not yet. The information contained in one cell cannot be contained in an ordinary computer. Another issue is that we should not throw away millions of years of evolution and start from zero. Genetic engineering and genetic computing might allow us to produce "machines" "better" than humans, basing ourselves on humans.

Will machines make us dispensable, and will they do with us what we did with god? (6) Perhaps, but, as Nietzsche stated, our goal is to create beings superior to ourselves. He meant our children, but our machines are also our creation. In other words, it is our nature to create superior beings. If this also implies our extinction, it does not matter. We are finite anyway.



1.2. What is a Behaviour-Based System?



As we said in Section 1.1, behaviour-based systems (BBS) are inspired by the field of ethology, the branch of biology that studies animal behaviour (Manning, 1979; Tinbergen, 1951; Lorenz, 1981; McFarland, 1981). This is because many properties desirable in autonomous intelligent systems are present in animal behaviour: autonomy (self-control), adaptation to changes in the environment, learning, situatedness, goal-directedness, and persistence, among others.

We can say that the goal of a BBS is to provide the control (cybernetics (Wiener, 1948)) of an autonomous agent. The term agent (7) has been used in a wide variety of contexts. For us, an agent is a system that has goals to fulfil. An agent exists within an environment, which may be dynamic and complex. An agent is said to be situated in his environment if he can perceive it and act upon it. Examples of agents would be robots in a physical environment, software or interface agents in "cyberspace", and agents that inhabit simulated environments. An agent is said to be autonomous if he can determine his own goals by himself. If the autonomous agent is able to adjust his goals in terms of what he perceives in his changing environment (his beliefs), he is also said to be adaptive. If this adaptation is opportunistic, we can say that the autonomy and the adaptation themselves are of a higher order: intelligent.
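To make these terms concrete, here is a minimal sketch (with illustrative names of our own, not a standard API) of a situated, adaptive autonomous agent: he perceives his environment, updates his beliefs, may revise his goals in light of them, and then acts.

```python
# Minimal sketch of a situated, adaptive autonomous agent (illustrative only).

class AdaptiveAgent:
    def __init__(self, goals):
        self.goals = list(goals)   # an autonomous agent holds his own goals
        self.beliefs = {}          # what he currently believes about his environment

    def perceive(self, environment):
        # Situatedness: the agent can perceive his environment...
        self.beliefs.update(environment.sense())

    def adapt(self):
        # Adaptation: goals are adjusted in terms of current beliefs.
        if self.beliefs.get("energy", 1.0) < 0.2 and "find_food" not in self.goals:
            self.goals.insert(0, "find_food")

    def act(self, environment):
        # ...and act upon it, pursuing the most pressing goal.
        if self.goals:
            environment.apply(self.goals[0])

    def step(self, environment):
        self.perceive(environment)
        self.adapt()
        self.act(environment)

class ToyEnvironment:
    def __init__(self):
        self.state = {"energy": 0.1}
        self.last_action = None

    def sense(self):
        return dict(self.state)

    def apply(self, action):
        self.last_action = action

if __name__ == "__main__":
    env = ToyEnvironment()
    agent = AdaptiveAgent(goals=["explore"])
    agent.step(env)
    print(agent.goals, env.last_action)   # ['find_food', 'explore'] find_food
```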

We can find three basic types of adaptation in an adaptive autonomous agent (AAA) (Meyer and Guillot, 1990):



The main problem to be solved for building a BBS is: "to come up with an architecture for an autonomous agent that will result in the agent demonstrating adaptive, robust, and effective behaviour" (Maes, 1993). We can find that there are many subproblems to be solved in order to solve the main problem:



If the BBS consists of a society of agents, we may have more subproblems:



BBS often present emergent properties. Emergence in BBS will be discussed in Section 2.1, after we state some notions about complex systems and emergence.



1.3. Some Areas of Application of BBS

BBS may be applied in a wide range of fields. Wherever a control system is needed to take quick adaptive decisions, a BBS may be used. In the following sections, we describe their applications to robotics, software agents, artificial life, and philosophy.



1.3.1. Robotics



"If we consider the (human) body as a machine, we shall conclude that it is much more ordered that any other; and its movements are more admirable than those of machines invented by man, because the body has been made by God."

--Descartes



Robots with specific functions, like the ones that work in manufacturing plants, are more or less fully developed; we mean that people have a clear idea of how to build them. This is because they are rather simple. They are rather "dumb". But what about mobile autonomous robots, which operate in a real and dynamic environment, have many goals, and must take decisions (8)? Researchers in AI have been building them for a long time, but they still do not fulfil all the requirements desired of them. There has indeed been great improvement in the design and building of these robots, though. We can say that the evolution of robotics is at an insect level: we can successfully imitate an insect's behaviour (Beer and Chiel, 1993; Brooks, 1993).

Most researchers began to build robots using knowledge representations (e.g. rules). But these robots easily malfunctioned and hardly achieved their goals. Since the properties desired in these robots were present in animal behaviour, researchers began to model and imitate this behaviour. This was one of the main reasons for the development of BBS.

Examples of these robots are: Herbert (Connell, 1989), a robot whose goal was to collect empty cans around the MIT Mobot Lab; Periplaneta Computatrix (Beer et al., 1992; Beer and Chiel, 1993), a robotic cockroach inspired by the neural circuits of the American cockroach; Kismet (Breazeal, 1999), a robot for social interactions with humans; and COG (Brooks, 1993), a humanoid robot capable of restricted communication with humans.

Other applications of these robots include extraterrestrial exploration, where the robots must have some autonomy, due to the time that a signal from Earth takes to reach them. A great deal of research has also been put into robots that play soccer, and the organization of RoboCup has stimulated this research. Robots have also been developed for submarine exploration, bomb deactivation, and entertainment (i.e. toys).

While building robots, several problems must be solved, such as perception, action, and decision. The decision part is solved using BBS when the robot has to take quick decisions, be situated in a dynamic environment, and perhaps learn from its experience. The motion and perception also tend to be biologically inspired.
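As one deliberately simplified illustration of how perception and action can be coupled reactively, the following sketch implements a Braitenberg-vehicle-style obstacle avoidance: each of two proximity sensors excites the wheel motor on its own side, so the robot turns away from whichever side is more obstructed. This is a toy example of our own, not any of the robots cited above.

```python
# Minimal Braitenberg-style obstacle avoidance (toy example, not a cited robot).
# Each proximity sensor excites the wheel motor on its own side; the side nearer
# an obstacle spins faster, so the robot turns away from the obstruction.

def motor_speeds(left_proximity, right_proximity, base_speed=0.3, gain=0.7):
    """Proximity readings are in [0, 1]; 1.0 means an obstacle is very close."""
    left_motor = base_speed + gain * left_proximity
    right_motor = base_speed + gain * right_proximity
    return left_motor, right_motor

if __name__ == "__main__":
    # Obstacle on the left: left motor speeds up, robot veers right.
    print(motor_speeds(left_proximity=0.8, right_proximity=0.1))
    # Clear path: both motors at base speed, robot goes straight.
    print(motor_speeds(left_proximity=0.0, right_proximity=0.0))
```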



1.3.1.1. Why do we build intelligent robots?



Or rather, what do we build intelligent robots for? Some people might answer:

Perhaps all of these answers apply in some cases, but none is the main reason to develop robots.

We would agree that robots are built in order to develop synthetic theories of animals and humans (Steels, 1995). By building robots, we understand how the processes of perception, action, decision, integration of information, and interaction take place in natural systems. If the robot has no usefulness per se, it does not matter. Robots in AI are not built mainly to be useful. How useful is a twenty-thousand-dollar robot that knows how to go for coffee? The point is to understand how we are capable of doing so. Of course, once you know the rules of the game, you can change them.



1.3.2. Software agents



"Agents are objects with soul."

--A. Guzmán Arenas



Software agents have been inspired by AI and by computer science's theory of objects. We can say that they are programmes with agent properties. There are many definitions of software agents, and some authors may have weaker or stronger notions of agency (Genesereth and Ketchpel, 1994; Wooldridge and Jennings, 1995; Russell and Norvig, 1994; Gershenson, 1998b). Since there is a wide variety of definitions of software agents, we will give a loose one.

A software agent is a programme that has some autonomy. He is in an environment, which he might perceive and act upon. He has a specific goal or function, which he will try to fulfil. He might interact with other agents, which makes him social. Examples of agents range from a UNIX daemon to a spider (an agent that crawls the web), and from a computer game character to a personal digital assistant (PDA). Agents might be embedded in other agents.

From the computer science perspective, a software agent is a component: an object (Booch, 1994) which might communicate with other components.
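A minimal sketch of this component view (with hypothetical names of our own, not any particular agent framework): two software agents, each with his own role, exchanging messages through simple inboxes.

```python
# Minimal sketch of software agents as communicating components (hypothetical example).
import queue

class SoftwareAgent:
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()   # messages from other components arrive here

    def send(self, other, message):
        # Communicate with another component by posting to its inbox.
        other.inbox.put((self.name, message))

    def step(self):
        # Perceive: read pending messages; act: here we simply report them.
        while not self.inbox.empty():
            sender, message = self.inbox.get()
            print(f"{self.name} received {message!r} from {sender}")

if __name__ == "__main__":
    monitor = SoftwareAgent("monitor")
    notifier = SoftwareAgent("notifier")
    monitor.send(notifier, "disk usage above threshold")
    notifier.step()
```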

Since the tendency in software systems is to develop them in a distributed way, agent theories are having a great influence on computer science. For example, with agent-oriented programming (Shoham, 1993) and agent-based software engineering (Wooldridge, 1997; Jennings, 2000). Because of the properties of agents, and also because of the needs of the market, the software industry is already moving from the object paradigm to the agent paradigm.

The SWARM Simulation Environment, developed at the Santa Fe Institute, is a wonderful agent-oriented approach for building simulations, which extends software agent theory itself. It would not be surprising if the SWARM ideology were used in the near future for every kind of software development, even though it was designed for simulation purposes, including artificial intelligence and artificial life.

It is clear that, since autonomous agents have to take decisions, BBS are linked to systems using software agents. Advances in BBS will influence software agents, and vice versa.



1.3.3. Artificial life



"Ninety percent of life is just being there"

--Woody Allen



Artificial life (Alife) simulates life, in a similar way to how AI simulates intelligence. Alife is a synthetic representation of life. Since we perceive intelligence in many living organisms, AI and Alife are closely linked, and sometimes overlap.

Since BBS are inspired by animal behaviour, we could say that all BBS are included in Alife.

Perhaps we could roughly distinguish between the research done in Alife and in BBS: Alife has studied more social behaviour (e.g. Reynolds, 1987) and evolution (e.g. Sims, 1994), while BBS have studied more adaptive behaviour. Of course, they have overlapped.



1.3.4. Philosophy

"How can we ask ourselves how can we ask ourselves?"



BBS have led researchers and philosophers to propose theories of "how the mind works".

One example of this is Marvin Minsky's Society of Mind. He sees the mind as a society of non-intelligent agents, from which intelligence emerges (Minsky, 1985).

Another example is the theory proposed by Andy Clark. By studying BBS (which have studied ethology), he has seen that the mind is isolated neither from the body nor from the world: our mind is distributed across our brains, bodies, and worlds (Clark, 1997).

As we explained in the beginning of this chapter, with the synthetic method, by building artificial systems, we can understand the natural ones. As AI develops, it affects more advanced philosophical concepts and their relations, such as self, reason, beliefs, truth, and being.



1.4. About BBS

"If something has an explanation, you can explain it.

If it has no explanation, you should explain why you cannot explain it"



Behaviour-based systems are a promising paradigm for understanding intelligence in animals and humans, and for developing systems simulating such intelligences. If we want to understand the evolution of cognition, that is, how animal intelligence evolved into human intelligence, we also need to address other issues, such as culture, language, and society. In the next chapter, we will give a brief introduction to artificial societies, which, in a similar way to BBS, are assisting in the understanding of social processes.

 


1. Recent studies show that animals are capable of exhibiting simple forms of knowledge (e.g. Congo parrots (Pepperberg, 1991)), but these issues were not considered by AI researchers in the middle of the twentieth century.

2. Section 1.1.1 addresses our concept of intelligence.

3. We give a notion of the concept of "agent" in Section 1.2.

4. Some people (Marvin Minsky and Lynn Stein, for example) do not care about the how at all.

5. This issue came from discussing with Marvin Minsky and Push Singh.

6. This issue was introduced by Fernando Contreras.

7. In English, people used the pronoun "it" for animals when they considered that they had no intelligence. Since the paradigm of behaviour-based systems consists precisely in assuming that animals are intelligent and in building intelligent systems inspired by animal intelligence, researchers often refer to animals, agents, robots, and animats with the pronouns "he" or "she". We will refer to agents and animats as "he", because we consider that, although they exhibit intelligence, it is low enough to be considered as masculine.

8. From now on, we will refer to this type of robots just as "robots".


