When I began studying artificial intelligence, I tried to learn what the experts were doing. I researched neural nets and read papers about the nature of consciousness. I participated in programming contests for game AI and recommendation engines. But after years of work, my creations never seemed intelligent. I wanted to build something conscious, something smart in the same way that people are smart. But in that area, nobody seemed to be having any more success than I was.
I realized I was trying to create artificial intelligence without a solid grasp on what intelligence actually was. I am no outsider to this field. I have degrees in Biology and Education, so it was embarrassing to admit that I couldn't define intelligence as well as I assumed I could. I needed to take a step back and not start again until I had a concise, logical, and evidence-based definition of intelligence.
I couldn't find a widely accepted definition of intelligence. Work is being conducted under many different and incompatible definitions. Intelligence researchers are stubbornly attempting to move forward without an answer to the central question of "What is intelligence?"
I paused my attempts to create AI until I documented clear, precise, and accurate definitions of all key intelligence concepts and their emergent properties. I believe that work bore fruit, and I am happy to share it. The final result is mostly a glossary of intelligence terms.
Why do I believe that my definitions are better than others? Because I adhered to these principles:
First, I will define simple terms such as behavior, action, goal, and benefit but please do not take these for granted. They are the foundation of more advanced terms. By the end, when I define consciousness, understanding, ideas, and other complex terms, if I have done my job well, they should seem as underwhelming and unimpressive as every other definition.
Behavior is a quality of a thing where a set of conditions initiates an action. It describes the cause and effect relation for a particular medium. Maybe a simple diagram is clearer.
Behavior: Conditions → Thing → Actions
A behavior may be intelligent or it may not be. If you push a rock, the effect might be for it to roll down a hill. This is a behavior of the rock, though it is nonintelligent. The rock has no other options to react to the pushing, nor does it have the ability to choose one.
Push a badger, and you can’t be certain what sort of reaction might occur. It is intelligent and so has the power to choose.
Let’s not call things with intelligent behavior things anymore. The ability to make choices earns them special recognition. Let's use the term creature to refer to a thing that is intelligent. This word is commonly used as a synonym for animal, but its origin is the Late Latin word creatura, which means "something created," so it is appropriate for both biological and artificial intelligent things.
For a creature to display intelligent behavior, it must have a mechanism to select actions based on information from its environment. Let’s call that mechanism Behavioral Logic.
Conditions --> Creature --> Action
                 |  ^
                 v  |
          Behavioral Logic
A brain, or working implementation of behavioral logic, can be built from any appropriate material: nerve cells, metal wires and switches, even mechanical gears and levers.
The simplest brain would be a behavior table: a simple list of behaviors.
Let’s imagine a creature with two types of sensors that can transmit either a 1 or 0. The creature also has three possible actions we will call A, B and C. Its behavior table might look like this:
Each row of this table represents one behavior. The columns are:
Situation - A group of coinciding sensory information.
Action - A physical, chemical, or electrical change.
The changes caused by actions don’t have to be external. They could instead affect other behaviors. Alternate behavior tables could be chosen if the creature is hungry, sleepy, afraid, or in any other internal or environmental state. Let’s say action C causes the creature to use table 2, and action D changes it back to the first table.
Mood - A behavior table chosen as an action.
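As a sketch of the ideas so far, here is a hypothetical two-table brain in Python. The table contents, and the choice of C and D as mood-switching actions, are illustrative assumptions, not a prescribed design:

```python
# A hypothetical behavior-table "brain": each table maps a situation
# (the combined readings of two binary sensors) to an action.
# Action "C" switches to table 2; action "D" switches back to table 1.
TABLES = {
    1: {(0, 0): "A", (0, 1): "B", (1, 0): "B", (1, 1): "C"},
    2: {(0, 0): "D", (0, 1): "A", (1, 0): "A", (1, 1): "B"},
}

class Automaton:
    def __init__(self):
        self.mood = 1  # which behavior table is currently active

    def act(self, situation):
        action = TABLES[self.mood][situation]
        if action == "C":   # mood-changing actions select a new table
            self.mood = 2
        elif action == "D":
            self.mood = 1
        return action

bot = Automaton()
print(bot.act((1, 1)))  # "C": switches the automaton to table 2
print(bot.act((0, 1)))  # "A": now looked up in table 2
```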
Behavior tables can be used to form complex behaviors. But how do we judge if these behaviors are valuable to the creature?
Just because a creature has a brain, there is no guarantee that it will make good choices. How can we assess if a behavior is beneficial or harmful?
Unlike physical properties such as temperature and light, there is no known measurement that directly correlates to "good" or "bad" behaviors. Benefit is relative to what a particular creature is trying to accomplish. To evaluate a behavior, we must observe if the effects increase or decrease the odds that the creature’s primary goal will succeed.
Goal - The ultimate purpose of a creature’s actions.
Webster’s 1913 edition has an elegant definition of Goal.
Goal 2. The final purpose or aim; the end to which a design tends…
Estimating a behavior’s effect on the odds of success of a goal is called behavior evaluation. If a behavior increases the chance of success, it is beneficial. If not, the behavior is harmful.
If a robot’s reason for existence is to vacuum my basement, I can assess its effectiveness by measuring dust on the floor. A vacuum-bot with more beneficial behaviors should leave less dust on the floor than one with fewer. Most biological organisms’ goal is to reproduce. A living creature’s Darwinian fitness is typically measured by its number of offspring.
It may not be obvious whether a behavior is beneficial if the creature’s goal requires a long time to accomplish. The assessment of behavior may require weighing the effects of multiple short- and long-term benefits and detriments.
Now that we have covered some basic terms related to behavior, we are ready for the Behavioral Logic definition of intelligence.
Intelligence - Choosing actions to accomplish a goal.
There are many definitions of intelligence, but this one is mine. It should describe anything with even the smallest amount of intelligence.
Where does intelligence come from? Some process must exist to create, evaluate, and modify behaviors. We will call this process learning.
Three possible ways for a creature to learn are:

1. External programming
2. Training through experience
3. Simulation (thinking)
The next three sections will address each of these learning processes including possible mechanisms, emergent effects, advantages, and disadvantages.
Some creatures are dependent on outside forces to evaluate and modify their behaviors. Examples of external programmers are natural selection for living organisms, and people for intelligent artifacts.
The execution of programmed behaviors does not require any thought or opinion from the creature. Programmed creatures only follow their given instructions.
Programmer - A source of learning external to a creature.
Instinct - A behavior acquired from a programmer.
Programmed behaviors are not so much learned as taught.
Learning through external programming is the simplest way for a creature’s brain to acquire beneficial behaviors.
It is possible for an external programmer to optimize a behavior for maximum benefit.
Because external programming does not require a brain to have the ability to learn, the brain can focus exclusively on executing behaviors.
Not only do optimized behaviors free of learning circuitry perform faster than those otherwise encumbered, but programmed behavior also requires no first-hand experience. Programmed behaviors are ready to use as soon as they are made. No schooling or practice is required.
An externally programmed creature does not have the ability to handle new situations. It has no other option but to wait for its programmer to make changes. This means that behavioral logic that relies entirely on external programming is only appropriate in a single, stable environment.
Most chess programs can easily win against me, but if I change the rules so pawns can teleport to any free space and the winning condition is moving the king across the board, a human could quickly adjust their strategy and easily defeat any chess AI. Stupid computer.
And this is the best case—when the programmer is doing a perfect job. Problems can be caused by a poor programmer, and the creature is powerless to fix them.
A creature’s external programming includes all parts of itself that it cannot modify such as:
Because programmed creatures can't learn and only follow instructions, we can think of them as automatons.
Creatures that rely on an external programmer will struggle in environments where conditions change unpredictably and frequently. In chaotic environments, creatures need the flexibility to modify their behavior in response to current conditions.
A simple behavior training mechanism has these components.
Unless there is an initial randomness in behavior, there is no opportunity for a creature to explore outside of its original programming.
One way to implement this randomness, or blurriness, is to select blurry action parameters at random from a range of values. For example, if a creature has an action to move an appendage, a blurry parameter could be the speed of movement. It could be initially programmed to have a wide range of random values to unpredictably move slow, medium, or fast.
P() |    ••••••••••••
    |  ••            ••
  0 |••________________••
     0   0.5   1.0   1.5   2.0   Speed
Some behaviors could be blurrier than others. Critical behaviors with predictable benefit should have very little variation. No creature should have to learn to breathe through trial and error.
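A minimal sketch of a blurry parameter, assuming a discrete set of hypothetical candidate speeds with near-uniform selection weights:

```python
import random

# Hypothetical blurry parameter: movement speed is sampled from a set of
# candidate values, each with a selection weight. An initially flat set
# of weights lets the creature explore slow, medium, and fast movements.
speed_values  = [0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
speed_weights = [1.0] * len(speed_values)   # wide, flat distribution

def choose_speed():
    return random.choices(speed_values, weights=speed_weights, k=1)[0]

# A critical behavior (like breathing) would instead use a single value
# or a very narrow range, leaving nothing to learn by trial and error.
```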
To judge if the result of an action was beneficial or harmful, sensors shouldn’t transmit an objective measurement, but an opinion. Unlike most visual and auditory information, subjective senses judge if something feels good or bad, so we call information received from subjective sensors feelings.
In the previous example, stress sensors could cause the creature to feel pain to warn of imminent damage if the appendage moves too fast. Chemical receptors could transmit a feeling of pleasure if the action resulted in touching a desirable target (food, presumably).
A single occurrence of a situation, the chosen action (and its parameter values), and the resulting subjective senses is called an experience. Positive experiences should increase the probability of the same blurry action parameters being selected in future situations. Painful experiences should decrease the chance of using those parameter values again.
Here are 4 hypothetical experiences of the appendage movement example.
If we use these experiences to tune the original probabilities (decreasing the likelihood of stressful movement speeds and increasing those that tasted food) our new distribution of movement speeds should look something like this.
    |      •••••
P() |     ••    •
    |    ••      •
  0 |••••__________••••
     0   0.5   1.0   1.5   2.0   Speed
The creature no longer reacts to this situation out of instinct. After tuning through experience, the reaction is now a habit.
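A sketch of how experiences might tune the weights, assuming a hypothetical multiplicative update rule (pleasure raises a speed's selection weight, pain lowers it); all values are illustrative:

```python
# Hypothetical habit tuning: each experience adjusts the selection weight
# of the speed that was actually used. Pleasure makes that speed more
# likely in the future; pain makes it less likely.
speed_values  = [0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
speed_weights = [1.0] * len(speed_values)

def tune(speed, feeling):
    """feeling > 0 is pleasure, feeling < 0 is pain."""
    i = speed_values.index(speed)
    speed_weights[i] *= 1.5 if feeling > 0 else 0.5

# Four hypothetical experiences: fast movements hurt; a moderate
# speed twice resulted in touching food.
for speed, feeling in [(1.75, -1), (1.5, -1), (0.75, +1), (0.75, +1)]:
    tune(speed, feeling)

print(speed_weights)  # [1.0, 1.0, 2.25, 1.0, 1.0, 0.5, 0.5]
```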
Childhood is the period of time at the start of a creature's life characterized by extreme randomness of behavior. Children are at a significant disadvantage when competing against either programmed automatons or adults with behaviors already tuned from experience. One way to protect children during this time is to use games—easier and safer versions of their environment. Puppies play by wrestling with each other long before needing to fight for their lives. Cats bring injured prey to their kittens for easy hunting practice.
Another way to overcome the disadvantages of childhood is to produce as many children as possible and hope some will survive to adulthood.
As a creature’s experience increases, blurriness decreases and subjective senses become less important. Adults with sufficient experience should behave more like automatons.
Tuning behavior with experience works, but only for behaviors that result in immediate benefit or harm. Is there a way to give our beasts the wisdom to look ahead to longer term benefit?
One way would be, instead of relying only on subjective senses to evaluate benefit or harm, to remember which feeling was associated with a past situation. That information could be used to create subjective opinions of objective situations.
| Situation | Associated Subjective Sense   |
|-----------|-------------------------------|
| 11        | pleasure (good taste of food) |
A vertebrate’s optic nerve cannot transmit pain, but if it relays a scene that resembles a previous pleasant experience, a creature could feel a similar pleasure. A chess program could examine board state and remember if a related configuration from its past was associated with a win or loss.
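Such an opinions table can be sketched as a simple lookup; the situations and feeling scores here are hypothetical:

```python
# Hypothetical opinions table: objective situations remembered together
# with the subjective sense (feeling) that accompanied them.
opinions = {
    (1, 1): +1.0,   # pleasure: the good taste of food
    (1, 0): -0.5,   # mild pain: stress on the appendage
}

def judge(situation):
    """Return the remembered feeling; unknown situations are neutral."""
    return opinions.get(situation, 0.0)

print(judge((1, 1)))  # 1.0: this situation looks like a good one
print(judge((0, 0)))  # 0.0: no opinion yet
```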
Creatures that form behaviors through experience can adapt to a changing environment without waiting for an outside programmer.
This also allows trainable creatures to thrive in different environments with the same initial programming. Creatures that can form habits are generalists.
There are some advantages to learning through experience, but the drawbacks are numerous.
A trainable creature spends a significant amount of time at the beginning of its existence choosing haphazard actions, spending time and resources on actions that do not directly further its primary goal. During this period, mistakes with possibly harmful consequences are easy to make.
Habit-forming creatures are not only threatened by physical damage; they also risk behavioral damage—harmful behaviors acquired from poor training. Anomalous experiences can create idiosyncrasies—behaviors tuned to harmful instead of beneficial effect. Insufficient training can leave highly random, wild behavior persisting into adulthood. Bad behaviors can render a creature as incapable of accomplishing its goal as one that is physically damaged.
Even though learning from experience is considered a "higher" form of intelligence than programming (it requires more complex behavioral logic than simple behavior tables), it often results in behaviors that are far from maximally beneficial. The more behaviors a creature has, the more likely that some will not be adequately trained before adulthood. It should be no surprise that habit-forming creatures frequently display harmful behaviors. The hope is that the benefit of many good behaviors outweighs the effects of a small number of harmful ones.
The amount of initial random variation of a habit-forming creature’s behaviors and actions should be set to an optimal amount. Too little, and a behavior is indistinguishable from programmed. Too much, and the search space is too large to be tuned in time for adulthood.
Subjective senses are only a rough estimator of benefit or harm. Due to this inherent inaccuracy in behavior evaluation, tuning to subjective sensory information may not actually be increasing the odds of success of the creature's primary goal.
The problems associated with learning through trial-and-error are numerous and devastating. Why would any creature want to accept the risk, energy and time investment for mediocre behaviors? There is only one explanation: The benefit of habit forming is greater than the harm.
Habitual learners are much more than automatons. They feel pleasure and pain. They remember their experiences. They can form long term strategies by associating objective patterns with failure or success. They deserve a better name.
Beast - A creature that can form habits.
It’s not flattering, but better than automaton. From Webster’s 1913 edition:
Beast 3. As opposed to man: Any irrational animal.
The major drawback of programmed behaviors is inflexibility. The time external programmers require to make changes can be too great for creatures living in frequently changing environments, or in a variety of environments.
Training overcomes this limitation, but at great cost of time, energy, and risk to acquire sub-optimal behaviors. What habitual learners need is a way to evaluate the benefit of new behaviors quickly and safely. They need the ability to imagine a situation and predict the benefit of an action.
A good resource to predict the effects of actions would be a collection of how situations were observed to change after an action was performed. This is science in its most primitive form.
| Initial Situation | Action            | Consequent Situation |
|-------------------|-------------------|----------------------|
| 10                | move toward smell | 11                   |
Each row of this table is a belief that can predict the consequences of an action. A creature's understanding of its environment is a measure of how complete, accurate, and efficient its beliefs are.
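A beliefs table can be sketched as a lookup keyed on the initial situation and the action; the entries here are hypothetical:

```python
# Hypothetical beliefs table: how situations were observed to change
# after an action was performed. Keys are (situation, action) pairs.
beliefs = {
    ((1, 0), "move toward smell"): (1, 1),
    ((1, 1), "eat"):               (0, 0),
}

def predict(situation, action):
    """Return the believed consequent situation, or None if no belief."""
    return beliefs.get((situation, action))

print(predict((1, 0), "move toward smell"))  # (1, 1)
```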
This is probably a good place to reiterate that my examples are meant to be the simplest I can imagine. Their purpose is to show that an implementation is possible, and to help illustrate the concept. I assume that real working implementations will be more sophisticated to optimize for effectiveness and speed. These advanced versions should still follow the same basic principles and, more importantly, appear to function the same to anything outside of their black box.
A thought is the conversion of a situation and a proposed action into predictive subjective sensory data. It starts with an idea for a possible action. This proposed action and the current situation can be used to locate the corresponding belief in the beliefs table. From that, we can predict a consequent situation.
Situation    Idea
     \       /
      v     v
      Beliefs
         |
         v
     Prediction
         |
         v
      Opinions
         |
         v
      Judgment
Once we have a predicted situation, we need to judge if that situation puts the creature in a beneficial or harmful position. Fortunately, it should already have an opinions table that correlates situations with a subjective sensory measurement. It can repurpose this table to judge how good or bad the results of its actions should be.
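A single thought can then be sketched as one lookup through each table; both tables and their entries are hypothetical:

```python
# A sketch of one thought: combine a beliefs table (what will happen?)
# with an opinions table (how would that feel?) to judge a proposed action.
beliefs  = {((1, 0), "move toward smell"): (1, 1)}
opinions = {(1, 1): +1.0, (1, 0): -0.5}

def think(situation, idea):
    prediction = beliefs.get((situation, idea))
    if prediction is None:
        return 0.0               # no belief: no basis for judgment
    return opinions.get(prediction, 0.0)

print(think((1, 0), "move toward smell"))  # 1.0: predicted to feel good
```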
But what if the simulated feelings were painful? What should be done then? We still need a way to use thoughts to choose the most beneficial actions.
Scenario <------ (New) Idea
    |                 ^
    v                 |
Simulation            |
    |                 |
    v           no    |
Decision -------------'
    |
    | yes
    v
   Act
After a simulation, a decision must be made to either stop thinking and act, or consider a new idea. One reasonable way to decide is to choose the first action that exceeds a certain benefit threshold.
With a luxury of time, a creature could continue to think about situations even after a beneficial action is found to search for an innovative solution of superior benefit.
If the situation is urgent, the most beneficial (or least harmful) idea could be chosen after a short time threshold.
In dire emergencies, a gut reaction could skip the entire thinking process to quickly act out of habit.
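The decision loop described above can be sketched as follows; the function names, the benefit values, and the threshold and deadline numbers are all hypothetical stand-ins:

```python
import time

# Think until an idea exceeds the benefit threshold, or the deadline
# passes; in that case, act on the best (or least harmful) idea so far.
def decide(situation, next_idea, think, threshold=0.5, deadline=0.01):
    best_idea, best_benefit = None, float("-inf")
    start = time.monotonic()
    for idea in next_idea(situation):
        benefit = think(situation, idea)
        if benefit > best_benefit:
            best_idea, best_benefit = idea, benefit
        if benefit >= threshold:          # good enough: stop and act
            return idea
        if time.monotonic() - start > deadline:
            break                         # urgent: take the best so far
    return best_idea

def next_idea(situation):
    yield from ["wait", "move toward smell", "flee"]

benefits = {"wait": 0.0, "move toward smell": 1.0, "flee": -0.2}
print(decide((1, 0), next_idea, lambda s, i: benefits[i]))
# "move toward smell": the first idea that clears the threshold
```

A gut reaction would bypass this loop entirely and execute the habitual action directly.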
All that thinking won’t do any good unless the creature can come up with an idea that is beneficial. How are new ideas produced, and what can be done to choose good ones quickly?
If the creature is already trained by experience, it has a habit ready to use as the first idea. So that’s one idea, but what if that action simulates painful feelings? Variations in proposed actions can be made in the same way as the blurriness of trained behaviors.
If minor variations all evaluate poorly, it may have to search for ideas that are significantly different from its current habits and instincts. Random actions and large variations of action parameters are possible ways to guess. Imagining what would happen if nothing was done isn’t a bad idea either.
If a creature has sufficient understanding of its environment, it could also get ideas by imitating behavior observed in others.
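Idea generation in the order described (the habit first, then minor variations, then doing nothing, then guesswork) could be sketched as a generator; the action names and the variation rule are illustrative assumptions:

```python
import random

# Hypothetical idea generator: yield candidate (action, speed) pairs in
# order of increasing desperation.
def ideas(habit_action, habit_speed):
    yield (habit_action, habit_speed)                 # the habit itself
    for tweak in (-0.25, +0.25, -0.5, +0.5):          # minor variations
        yield (habit_action, habit_speed + tweak)
    yield ("wait", 0.0)                               # imagine doing nothing
    actions = ["move", "reach", "flee"]
    while True:                                       # pure guesswork
        yield (random.choice(actions), random.uniform(0.0, 2.0))

gen = ideas("reach", 0.75)
print(next(gen))  # ('reach', 0.75): try the trained habit first
```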
A simulation does not have to end once an action is chosen. Instead of acting on the decision, a creature could instead start a new simulation using the prediction of a previously decided action, and then think about the best action in the ensuing scenario. It could then create a new scenario from the results of that decision, and so on.
Plan - A sequence of actions (steps) produced from a chain of simulations.
For important decisions, given the luxury of time to think, successive simulations can be used to generate plans. These plans can then be executed by performing the series of actions in immediate succession, without the need to stop and think after each action (as long as the intermediate result of each action matches the predicted outcome of each step of the plan).
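Chained simulation can be sketched as follows, assuming a hypothetical beliefs table and a "best action per situation" result produced by earlier decisions:

```python
# Hypothetical beliefs: how each action is predicted to change the situation.
beliefs = {
    ((0, 0), "move toward smell"): (1, 0),
    ((1, 0), "move toward smell"): (1, 1),
    ((1, 1), "eat"):               (0, 0),
}
# Hypothetical decisions: the best action per situation, found by simulation.
policy = {
    (0, 0): "move toward smell",
    (1, 0): "move toward smell",
    (1, 1): "eat",
}

def plan(situation, steps):
    """Chain simulations: each prediction becomes the next scenario."""
    actions = []
    for _ in range(steps):
        action = policy[situation]
        actions.append(action)
        situation = beliefs[(situation, action)]  # predicted next scenario
    return actions

print(plan((0, 0), 3))
# ['move toward smell', 'move toward smell', 'eat']
```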
Conscious creatures can evaluate the benefit of behaviors faster and with less risk than beasts. They are better at handling novel situations and can explore a wider variation of behavior without risk of harm.
Learning from simulations adds significant complexity to behavioral logic systems, increasing both the time and energy required for the execution of behaviors.
Because of increased complexity, more can go wrong. Here are a few possible problems.
An unfortunate creature could have inaccurate beliefs. This produces decisions that are imagined to be beneficial, but are disastrous in practice. As Yogi Berra said, "In theory there is no difference between theory and practice. In practice there is."
There is no known way to generate ideas that guarantees at least one will be beneficial in finite time, and certainly not in the brief time a creature has to act if it is to accomplish anything during its life span. Producing ideas can be pure guesswork.
When to make a decision may be the step of the thinking process most fraught with peril. Indecision can lock thinking into a never ending loop. Premature conclusions choose the first barely acceptable action, even if one significantly better could be discovered with a little more thought.
No wonder people have so many psychological issues.
I gave names to programmed and trained creatures (automatons and beasts) but not conscious ones. This is because I am not certain if only beasts can be conscious, or if it is possible to have conscious automatons. I feel strongly that consciousness as described requires underlying training logic both for idea generation (from previous experiences) and evaluation based on subjective senses. I don’t feel strongly enough to declare it for certain. I will leave the possibility of conscious automatons as an important outstanding question.
I have finished describing Behavioral Logic, a comprehensive theoretical model of intelligence and the result of several years of personal research. If I was not persuasive, that’s OK. I am sure I could have written more clearly and there are probably important ideas I have missed. I have no desire to use rhetorical tricks to deceive anyone into accepting my beliefs if they are unworthy. If I have persuaded you, thank you for reading with an open mind. Either way, the path forward is the same: try to prove me wrong. If what I have described is valuable, then robots built using the concept of behavioral logic should perform better than those that aren’t, and intelligent biological organisms should have analogous processes in the structure and function of their neuroanatomy. I am excited to start looking, and hope you join me.
I have mentioned building artificially intelligent robots (and intend this document to be a high-level specification for such), but I do not believe that the greatest possible benefit of a thorough and accurate understanding of intelligence is a robot butler (as cool as that would be). The greatest promise is a deeper understanding of ourselves. Powerful, simple, and accurate models of behavior would be a breakthrough in the study and treatment of mental illness and for getting the most from our greatest asset: the most powerful brains on the planet.
If we want to do good for ourselves, our families, our friends, our communities, and our world, we must take advantage of our intelligence. We should do all we can to make the best decisions, and our leaders should supervise an environment that nurtures our minds. If we have any hope of making our world the place that we dream it could be, we need to think.