Behavioral Logic Software Architecture

Software Specification for Generic Intelligence

Each intelligence project will be different depending on the hardware available for senses and actions, and the specific goal of the AI. This document describes an architecture for general use in any intelligent software.

Every member of your team should be familiar with the foundations of behavioral logic. A solid grounding in the basics of intelligence is valuable for the design and implementation of AI, and a common vocabulary will help team discussion and coordination.

Diagram Key

The diagrams included to illustrate relations between modules have a few conventions.

Modules

Any word (or short phrase) in a diagram should be considered a module. Modules are capitalized unless they are merely a persistent data structure without processing capabilities. The primary modules are in ALL CAPS.

External Input and Output

Rounded rectangles represent hardware components of the artifact that interact with the external environment. The ones shown are not comprehensive requirements; they only illustrate a few key interfaces between the hardware and software.

Branching/Decision

A diamond represents a standard flow chart decision.

Interfaces

Interfaces between modules are represented by arrows. The direction of the arrow indicates the flow of data from source (no arrowhead) to destination (arrowhead). Meaning is also ascribed to open versus filled arrowheads, and to single versus double lines.

-----▷ Single line, open arrowhead (read data)
-----▶ Single line, filled arrowhead (alter data)
=====▷ Double line, open arrowhead (init process)
=====▶ Double line, filled arrowhead (pass execution)

Single Line, Open Arrowhead

Read data

This indicates data reading only. It can represent retrieving data from a persistent data structure or the return value of a function call with no side effects.

Single Line, Filled Arrowhead

Alter data

This arrow represents the ability to modify (add, change, or delete) persistent data. It does not guarantee that the same data can also be read.

Double Line, Open Arrowhead

Initialize process

This arrow indicates the ability to trigger a process. Data can be passed to initialize the new process.

Double Line, Filled Arrowhead

Pass execution

This is the same as the double-line, open-arrowhead interface, but it also halts the parent process.

First Decisions - Goal and Learning Type

The software modules described below are generic. They are designed to be a starting point for many types of goals and hardware. The critical piece that the leader of an AI project must bring is the creature's primary goal.

Primary Goal

This single goal will affect all other features (actors, sensors, programmed behaviors, etc.), so it must be decided first and stated in the simplest manner possible. For example:

  • Bring plastic bottles to a specified receptacle.
  • Destroy Chaetocnema pulicaria (eggs, larva, pupa, and adult corn flea beetles) in a specified area.

Note that these are not worded in this manner:

Find plastic bottles in a 1 square kilometer area surrounding an initial base then acquire the bottle for transport to a container at the central location in an optimally efficient manner while avoiding any human contact. The robot should monitor its power level and when low, return to the charging base until its cells are fully recharged. It should also respond to a signal initialized by the person in charge of the robot to immediately return to base.

All of those things may be great ideas, but implementation specifics do not belong in a statement of primary goal. Also, do not bother specifying that an AI should operate in an efficient, optimal, fast, good, or otherwise vaguely positive manner. Maximum benefit (improving the odds of accomplishing the primary goal) is implied and should naturally improve as intelligence increases.

Programmed, Trained, or Simulational

If you are familiar with the foundations of behavioral logic, then you know that there are three types of learning: programmed, trained, and simulational. If you didn't know that, let me give you a quick summary. The simplest method of acquiring and improving behavior is being programmed by an external process. Learning through training includes random variations to programmed behaviors that are tuned from first-hand experience. Simulational learners possess a model of their environment that can be used to predict the results of actions before choosing a behavior.

Each of the three types of learning depends on the effective implementation of the ones before it (trained learning requires good programmed behavior; simulational learning requires both programmed and trained implementations). No matter how intelligent the robot will eventually be, it is recommended that the project initially focus only on programmed learning. All software modules are designed to be reused with higher levels of learning, so almost no work on basic learning types will need to be discarded when implementing more complex learning later. Developing programmed learning first will deliver the most benefit with the least effort and may be sufficient to accomplish the primary goal without more advanced intelligence.

The Brain

The brain is the main module where all other modules are initialized and connected. The primary intelligence modules are Senses, Actions, and Behaviors. There may also be hard-coded logic in external Sensors and Actors.

      SENSORS
         |
         |
         v
      SENSES

Sense state   mood
    |          ^
    |          |
    v          |
     BEHAVIORS
         ||
         ||
         VV
      ACTIONS
         ||
         ||
         VV
       ACTORS

This basic overall structure will be the same whether the intelligence is programmed, trained, or simulational. The brain module is also the level where development utilities such as logging, visualizations, manual controls, and simulated environments are integrated with the main intelligence modules.
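
As a concrete illustration, here is a minimal sketch of how a brain might wire the modules together. Every class and method name here is a hypothetical stand-in, not a required API; the rest of this document defines what each module must actually do.

    # Minimal sketch of the brain's wiring. Each class is a stub standing
    # in for the full module described later in this document.

    class Senses:
        def __init__(self, sensors, sense_state):
            self.sensors, self.sense_state = sensors, sense_state

        def update(self):
            # Observers poll hardware; real perception would distill this.
            for name, sensor in self.sensors.items():
                self.sense_state[name] = sensor()

    class Actions:
        def __init__(self, actors, sense_state):
            self.actors, self.sense_state = actors, sense_state

        def dispatch(self, action):
            self.sense_state["current_action"] = action  # actions update sense state
            self.actors[action]()                        # performer drives hardware

    class Behaviors:
        def __init__(self, sense_state, actions):
            self.sense_state, self.actions = sense_state, actions

        def choose(self):
            # Programmed intelligence: one trivial situation -> action rule.
            if self.sense_state.get("obstacle"):
                self.actions.dispatch("turn")
            else:
                self.actions.dispatch("forward")

    class Brain:
        def __init__(self, sensors, actors):
            self.sense_state = {}  # dumb data store shared by all modules
            self.senses = Senses(sensors, self.sense_state)
            self.actions = Actions(actors, self.sense_state)
            self.behaviors = Behaviors(self.sense_state, self.actions)

        def tick(self):
            # One sense -> behave -> act cycle.
            self.senses.update()
            self.behaviors.choose()

    brain = Brain(sensors={"obstacle": lambda: False},
                  actors={"forward": lambda: None, "turn": lambda: None})
    brain.tick()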

Primary Module Descriptions

Detailed specifications of the main modules make up the rest of this document, but first, here is a high-level summary.

Sensors

Sensors are hardware meant to collect information from the environment. Little processing occurs here and sensor values can only be requested by the senses module. No other module is allowed access to sensors, nor are sensors allowed to "push" data to the senses module.

Senses

The senses module's job is to collect information from sensors, interpret it, then make those interpretations available to behaviors and actions. Its hardest work is perception: the extraction of meaning from raw sensory data. It is the responsibility of the senses module to convert the huge amount of mostly irrelevant sensory data into the simplest representation required for behaviors.

Behaviors

The sole function of this module is to read sense state, then choose an action. It has no control over available perceptions and actions. This is simple for programmed intelligence, but includes training and simulation abilities for higher-level intelligences.

Actions

The actions module's job is to manage and coordinate all actions. In creatures with few actions, this could be very simple, but for complex synchronized maneuvers there could be a lot going on here.

Actors

Actors are hardware designed to manipulate a creature's environment and itself. These actions can only be controlled by the actions module.

AI Programming Tools

Developing intelligent hardware can be slow and complex. The speed of development can be significantly improved with a few basic utilities.

Viewer

A UI should be available to monitor sense state and its effects on behaviors in an intuitive fashion. There should be an interface to adjust tunables, modify the behavior table, and manually perform actions.

Tunables

There are many values in an intelligent process that should be adjusted for the best operation of perceptions, detectors, actions, etc. It should be possible to adjust them dynamically in the viewer. Blurry tunables will also include a deviation to define the range of randomly generated values.
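
A tunable need not be more than a named value, with blurry tunables adding a deviation. A minimal sketch (the class name and fields are assumptions, not part of the spec):

    import random
    from dataclasses import dataclass

    @dataclass
    class Tunable:
        name: str
        value: float
        deviation: float = 0.0  # zero for plain tunables; nonzero makes it blurry

        def sample(self) -> float:
            # Plain tunables return their value; blurry tunables return a
            # random value drawn from the range defined by the deviation.
            if self.deviation == 0.0:
                return self.value
            return random.gauss(self.value, self.deviation)

    turn_speed = Tunable("turn_speed", value=1.5)
    approach_angle = Tunable("approach_angle", value=30.0, deviation=5.0)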

Manual Mode

If this is set to true, the behaviors module is bypassed. A user interface is available that monitors perceptions and displays them both as raw data and as first-person visualizations that can be naturally understood by a human pilot. Controls are exposed in the UI to send commands directly to the actions module. Manual mode should be able to be toggled on and off dynamically.

Game Mode

If this command argument is set to true, sensors are replaced with fake data. The behaviors module sends its action requests to a game actions module.

Game and manual modes can be set independently. It is fine (and encouraged) to run manually during a game. Though manual mode can be toggled back and forth at will, game mode only makes sense to set at initial startup.
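
Building on the brain sketch above, the independence of the two flags might look like this. The hardware_sensors and hardware_actors helpers are hypothetical placeholders for real device drivers:

    import random

    def hardware_sensors():
        return {"obstacle": lambda: False}  # placeholder for real drivers

    def hardware_actors():
        return {"forward": lambda: None, "turn": lambda: None}

    def build_brain(game_mode: bool) -> Brain:
        # Game mode is fixed at startup: it swaps real hardware for fake
        # data and a game actions module, leaving every software module alone.
        if game_mode:
            sensors = {"obstacle": lambda: random.random() < 0.1}  # fake data
            actors = {"forward": lambda: None, "turn": lambda: None}
        else:
            sensors = hardware_sensors()
            actors = hardware_actors()
        return Brain(sensors, actors)

    def tick(brain: Brain, manual_command=None):
        # Manual mode can be toggled at any time; it only bypasses the
        # behaviors module. Senses and actions run exactly as usual.
        brain.senses.update()
        if manual_command is not None:
            brain.actions.dispatch(manual_command)  # pilot's command from the UI
        else:
            brain.behaviors.choose()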

An advanced feature of a game environment (beyond being an accurate replication of the robot's target habitat) is the ability to control time. It should be possible to slow the simulation to fine-tune precise actions, speed it up to quickly test the long-term effects of behaviors, or rewind it to an earlier state.

Programmed Intelligence Architecture

Programmed creatures are not capable of learning from experience or predicting the consequences of their actions. Still, effective and complex behaviors can be created through programming, and a solid programmed foundation is a prerequisite to more advanced behavior.

This diagram is an overview of all required modules for programmed intelligence.



Primary Module: Senses

The first primary module is senses. It should only concern itself with collecting and analyzing information about the environment.

Senses Interface: Output: Sense State

Sense State is data that should be conveniently readable by all modules, but only writable by Senses. This is the senses module's main interface and will account for the vast majority of communication to other modules.
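
One way to enforce "writable only by Senses, readable by all" is to hand other modules a read-only view. This is a sketch of that idea, not a requirement; in the full design, behaviors and actions would also get the narrow setters described below:

    from types import MappingProxyType

    class SenseState:
        """Dumb data store: written through set(), read everywhere else
        through the read-only view()."""

        def __init__(self):
            self._data = {}

        def set(self, key, value):
            # Called by the senses module (plus the narrow exceptions below).
            self._data[key] = value

        def view(self):
            # A live, read-only window handed to behaviors and actions.
            return MappingProxyType(self._data)

    state = SenseState()
    state.set("obstacle", True)
    readonly = state.view()
    print(readonly["obstacle"])  # True; attempting to assign raises TypeError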

Senses Interface: Input

It is uncommon for other modules to send information to the senses module, but it is possible. Situations may arise where increasing the timeliness or accuracy of attention and data is advantageous.

Mood information (persistent situation information) in sense state should be settable by the behaviors module.

Information about the action currently being performed should be tracked in sense state, and the actions module should be allowed to set it.

Private Modules

Observers are in charge of reading hardware sensors and converting their signals into usable information privately available to other parts of the senses module. Observers do not decide when to do this, but are triggered by the attention controller.

Perceivers are where the main computational power of the senses module resides. They sift through raw sense state to remove background noise, find patterns, and otherwise reduce a mass of measurements to concise and useful information. They are also beholden to the attention controller.

The attention controller establishes how often observers and perceivers run. It can contain its own logic to change how senses are updated depending on mood.

Detectors are custom logic that use perceivers to determine if a particular property of the environment is present. They may only return a value of true or false and are the direct connection to behavior table situations.

Sense State is a dumb data store tasked with making sensory information available to all intelligence processes. It can store and make accessible perceptions, detectors, moods, and current actions.
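
Put together, the private modules form a small pipeline. The sketch below elaborates the Senses stub from the brain sketch earlier; the distance sensors, threshold, and mood check are all invented for illustration:

    class Senses:
        def __init__(self, sensors, sense_state):
            self.sensors = sensors        # hardware handles, read by observers
            self.sense_state = sense_state
            self.period = 5               # attention controller: ticks per update
            self._tick = 0

        def update(self):
            # Attention controller: mood can change how often we look.
            period = 1 if self.sense_state.get("mood") == "alarmed" else self.period
            self._tick += 1
            if self._tick % period != 0:
                return
            # Observers: convert hardware signals into usable measurements.
            raw = {name: read() for name, read in self.sensors.items()}
            # Perceiver: reduce many measurements to one useful value.
            nearest = min(raw.values())
            self.sense_state["nearest"] = nearest
            # Detector: a true/false fact the behavior table can key on.
            self.sense_state["obstacle_near"] = nearest < 0.5

    senses = Senses({"front": lambda: 0.4, "left": lambda: 2.0}, {"mood": "alarmed"})
    senses.update()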

Primary Module: Behaviors

This module converts information from sense state to an action.

Behaviors Interface: Output: Action Dispatch

The most common information sent from the behaviors module is a request to the actions module to perform an action.

Behaviors Interface: Output: Mood

The behaviors module can also add moods to sense state.

Behaviors Interface: Input: Sense State Detectors

The only external information that behaviors should receive is detector values read from sense state.

Internal Structure

The entirety of a programmed creature's behaviors comes from a behavior table: a collection of situations tied to actions (in preferential order). There also needs to be a situation monitor to initiate behavior table lookups.

Behavior tables usually don't initiate simple actions directly. Complex actions can vary greatly depending on the current situation, and there is no need to clutter the behavior table with this logic. So instead of an action, behavior tables output a response: a key into the responses object, a collection of methods that send an action type with custom parameters to action dispatch based on the current sense state.
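
A minimal behavior table and responses object might look like the following sketch. The situation names, response keys, and dispatch signature are all assumptions for illustration:

    # Behavior table: detector-based situations tied to response keys,
    # checked in preferential order, with a catch-all default last.
    BEHAVIOR_TABLE = [
        ("obstacle_near", "avoid"),
        ("bottle_seen",   "approach"),
        ("always",        "wander"),
    ]

    # Responses: methods that turn a key plus the current sense state into
    # a concrete action type with custom parameters.
    RESPONSES = {
        "avoid":    lambda s: ("turn",    {"angle": 90 if s.get("clear_left") else -90}),
        "approach": lambda s: ("forward", {"speed": 0.5}),
        "wander":   lambda s: ("forward", {"speed": 0.2}),
    }

    def situation_monitor(sense_state, dispatch):
        # Look up the first situation whose detector reads true.
        for situation, response_key in BEHAVIOR_TABLE:
            if situation == "always" or sense_state.get(situation):
                action, params = RESPONSES[response_key](sense_state)
                dispatch(action, params)
                return

    situation_monitor({"obstacle_near": True, "clear_left": True}, print)
    # prints: turn {'angle': 90}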

Primary Module: Actions

The actions module should not only trigger the creature's abilities, but also coordinate complex actions and handle when to interrupt or ignore action requests.

Actions Interface: Output

The only information that actions should communicate to the other modules is updating the sense state with information about the current action.

Actions Interface: Input

All requests to initiate an action are listened for in a single, standard input we will call action dispatch. This is the main form of communication to this module and should account for > 99% of the activity.

Internal Structure

Action Dispatch

An action dispatch module listens for action requests and is responsible for setting the current action in sense state.

Performers

These are processes that are in charge of initiating simple actions.

Maneuvers

These methods orchestrate complex actions. They read sense state and output to performers.
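
A sketch of how dispatch, performers, and maneuvers might fit together, elaborating the Actions stub from the brain sketch earlier (the wheel interface and action names are invented):

    class Actions:
        def __init__(self, wheels, sense_state):
            self.wheels = wheels          # hypothetical actor: wheels(left, right)
            self.sense_state = sense_state

        # Action dispatch: the single standard entry point.
        def dispatch(self, action, params=None):
            self.sense_state["current_action"] = action
            getattr(self, action)(**(params or {}))

        # Performers: initiate simple actions directly.
        def forward(self, speed=0.2):
            self.wheels(speed, speed)

        def turn(self, angle=90):
            direction = 1 if angle > 0 else -1
            self.wheels(direction * 0.2, -direction * 0.2)

        # Maneuvers: orchestrate complex actions by reading sense state
        # and delegating to performers.
        def fetch(self):
            if self.sense_state.get("obstacle_near"):
                self.turn(90)             # interrupt the fetch to avoid first
            else:
                self.forward(0.5)

    acts = Actions(lambda left, right: None, {"obstacle_near": False})
    acts.dispatch("fetch")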

Modules for Trained Behavior

Trained behavior includes all parts of the programmed diagram, plus these additional parts added to the behaviors and senses modules.

  • Blurry Tunables in behaviors
  • Feelings from subjective senses
  • Tuning behavior variation


These three modifications allow basic habit-forming behavior. But to get the most out of this type of learning, we need a mechanism for classical conditioning. This can be accomplished with opinions. You can think of opinions as correlated subjective sensors.

Blurry Tunables in Behavior

Blurry tunables in behavior are the least clear and most unexplored requirement for habit-forming intelligence, but they are also a prerequisite for the others, so we will begin there. They do not form a nice new module to add to our architectural diagram; instead they are a new property inside a behavior table. Random variation in a behavior table could be almost anything, as long as it can be recorded and adjusted. This includes:

  • Behaviors randomly selecting from a list of possible actions
  • Behaviors varying parameters of actions
  • New behaviors added to the table using a current situation and random action
  • Reordering the hierarchy of behaviors

I am sure there are others (it is random, after all). Hopefully most creatures won't need extreme variation to fine-tune behavior.

Private Module: Blurry Tunables

Information about each behavior's random variations will need to be stored. Behaviors will use these settings when choosing actions. This module will be aware of subjective sense information and use it to make adjustments toward beneficial values.

Blurry Tunables Interface: Output

Blurry Tunables add an amount of randomness to a behavior table.

Blurry Tunables Interface: Input

Blurry tunables need to know the state of subjective senses immediately following a behavior so they can use that information to tune behavior variations accordingly.
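
A crude sketch of that feedback loop, assuming a single numeric parameter and a signed feeling in the range [-1, 1]; the update rule is an assumption, not part of the spec:

    import random

    class BlurryTunable:
        def __init__(self, value, deviation, rate=0.1):
            self.value, self.deviation, self.rate = value, deviation, rate
            self._last_sample = value

        def sample(self):
            # Behaviors call this when choosing an action: a random
            # variation around the current best-known value.
            self._last_sample = random.gauss(self.value, self.deviation)
            return self._last_sample

        def tune(self, feeling):
            # Positive feelings pull the value toward the sample just
            # used; negative feelings push it away.
            self.value += self.rate * feeling * (self._last_sample - self.value)

    speed = BlurryTunable(value=0.5, deviation=0.1)
    tried = speed.sample()   # the varied value actually used by the behavior
    speed.tune(feeling=1.0)  # a good outcome drifts the mean toward `tried`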

Subjective Senses

The subjective senses module's main job is to return an estimate of benefit or harm and little else. It needs very little control, only sending information to a section of sense state called "feelings". Feelings should persist for a short period: shorter than moods, but longer than objective senses.

Opinions

Opinions are simulated subjective senses and send information to the feelings section of sense state in the exact same way as subjective senses.

Opinions Interface: Output

Updates feelings in sense state.

Opinions Interface: Input

To build opinions, the opinions module needs two things: feelings information and the current situation.
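
A sketch of opinions as learned situation-to-feeling correlations. The update rule here (a simple running average) is an assumption; any correlation mechanism would do:

    class Opinions:
        """Correlated subjective senses: remember how situations tended to
        feel, then write that expectation back into feelings."""

        def __init__(self, rate=0.2):
            self.rate = rate
            self.table = {}  # situation -> learned feeling

        def observe(self, situation, feeling):
            # Classical conditioning: nudge the stored opinion toward the
            # feeling that actually followed this situation.
            old = self.table.get(situation, 0.0)
            self.table[situation] = old + self.rate * (feeling - old)

        def apply(self, situation, sense_state):
            # Encountering the situation now evokes the learned feeling,
            # exactly as a subjective sense would.
            if situation in self.table:
                sense_state["feelings"] = self.table[situation]

    ops = Opinions()
    ops.observe("near_charger", feeling=0.8)  # charging felt good
    state = {}
    ops.apply("near_charger", state)          # now the sight alone feels good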

Full Diagram of Trained Behavior Modules



Modules for Simulational Behavior

Adding training to a programmed intelligence was fairly easy. Add subjective sensors, feelings, opinions, and blurry tunables, plus alter behavior tables to take blurry tunables into account. Adding simulational behavior will require a more significant rework of behavior tables to include beliefs, decisions, and new ideas. Generating a plan (sequence of actions) is even more complex. Enough complaining, I'll get to it.

Simulational behavior makes use of all modules from trained behavior, plus these additions:



Private Module: Supervisor

The first thing that must be done to initiate thought is to determine if thought is needed. Some behaviors have high confidence, so there is no reason to waste time and energy fussing over something that already has a predictable solution. The supervisor can also speed up behavior during an emergency by lowering the confidence threshold below which simulation is initiated.

Supervisor Interface: Input: Mood

The supervisor needs to know if there is an emergency to assess whether there is time to run a simulation.

Supervisor Interface: Input: Blurry Tunables

The information in blurry tunables can be used to calculate a confidence assessment based on the number and consistency of experiences.

Supervisor Interface: Input: Situation Monitor

The supervisor receives the current situation to pass setting information to a scenario.

Supervisor Interface: Output: Scenario

If the setting is determined worthy of thought, a simulation is started by passing the situation information to the scenario module.

Supervisor Interface: Output: Behavior Table

If this is determined to be an emergency or a no-brainer, the situation is passed through the behavior table as a habit.
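
The supervisor's gate might be as small as this sketch. The threshold value and the emergency adjustment are assumptions:

    def supervise(situation, mood, confidence, threshold=0.8):
        # An emergency lowers the confidence required to act on habit, so
        # fewer situations trigger a (slow) simulation.
        if mood == "emergency":
            threshold -= 0.3
        if confidence >= threshold:
            return ("behavior_table", situation)  # habit or no-brainer
        return ("scenario", situation)            # worth thinking about

    print(supervise("bottle_blocked", mood="calm", confidence=0.4))
    # ('scenario', 'bottle_blocked')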

Private Module: Scenario

A scenario stores the current setting and idea in preparation for simulation. It also needs to keep track of all attempted ideas (to prevent duplication).

Scenario Interface: Input: Supervisor

A scenario always works from the current setting. If the setting changes, it needs to reset the idea history.

Scenario Interface: Input: Ideas

A proposed action is received from the ideas module.

Scenario Interface: Input: Decision

The decision module can trigger the need for a new idea to be evaluated.

Scenario Interface: Output: Judgment

The judgment module accepts the current idea and setting to estimate benefit or harm.

Private Module: Judgment

The judgment module accepts a prediction (a setting resulting from a simulation) and assesses it with opinions to determine benefit or harm, then sends its feelings to the decision module.

Judgment Interface: Input: Scenario

Judgment Interface: Input/Output: Beliefs

The judgment module sends the current setting and idea to the beliefs module and receives its prediction.

Judgment Interface: Output: Decision

Once it has assessed the prediction and formed an opinion, the judgment module passes both the idea and the opinion to the decision module.

Private Module: Decision

The decision module accepts an idea and its resultant opinion, then determines whether that idea should be acted upon. It persists a list of ideas and their opinions (ordered by benefit) until a decision is made or a new setting is encountered. This list always indicates a "best" idea that will be passed to action dispatch at the end of the thought process. When to end the thought process is determined by two factors: urgency and confidence.

Decision Interface: Input: Judgment

It receives not only an opinion; the judgment module passes along the original, unmodified idea as well.

Decision Interface: Output: Action Dispatch

If it is determined that an idea is good, that idea is passed to action dispatch and the simulation halts.

Decision Interface: Output: Scenario

If there is time to search for actions of greater benefit, a new scenario can be initiated.
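
Stitched together, one pass through scenario, judgment, and decision might look like this sketch. Every function argument is a hypothetical stand-in for the modules described above:

    def think(setting, ideas, predict, judge, urgency, patience=5):
        """ideas: candidate actions; predict: beliefs' prediction of the
        resulting setting; judge: opinion of a predicted setting."""
        tried, best = set(), (None, float("-inf"))
        for idea in ideas:
            if idea in tried:
                continue                         # scenario: never repeat an idea
            tried.add(idea)
            prediction = predict(setting, idea)  # beliefs
            opinion = judge(prediction)          # judgment via opinions
            if opinion > best[1]:
                best = (idea, opinion)           # decision keeps the best so far
            if urgency > 0.8 or len(tried) >= patience:
                break                            # decision: time to act
        return best[0]                           # passed to action dispatch

    action = think(
        setting="bottle_blocked",
        ideas=["habit", "no_action", "push", "go_around"],
        predict=lambda s, i: s + "+" + i,
        judge=lambda p: 1.0 if "go_around" in p else 0.0,
        urgency=0.2,
    )
    print(action)  # go_around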

Private Module: Supervisor: Urgency

Urgency can be determined by judging the harm of doing nothing. If it is low, more time can be spent thinking before acting. Mood can also affect urgency if it indicates a dangerous environment.

Private Module: Supervisor: Confidence

Confidence is a measure of how much experience the creature has in the same or very similar situations. If the situation has been thoroughly thought about before, and a beneficial outcome was consistently produced, there is little benefit to repeating the same process again.

Private Module: Ideas

Generating new ideas is still a black art, but there are a few places where it seems reasonable to start. Two ideas that should almost always be tried are 1) Whatever is in the behavior table already (instinct or habit) and 2) No action. Judging the benefit of doing nothing not only gives a good baseline to compare against other ideas, but is also helpful to let the decision module know how urgently it needs a better action. The next place to search for new ideas is to use the blurry tunables module to create random mutations to the standard action. If that isn't working, then how to come up with ideas outside of the box is up to you.
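
That ordering translates directly into a generator. The mutate argument stands in for blurry-tunable variation of the standard action; everything here is illustrative:

    import random

    def generate_ideas(situation, behavior_table, mutate):
        habit = behavior_table.get(situation)
        if habit is not None:
            yield habit          # 1) whatever is already in the table
        yield "no_action"        # 2) the do-nothing baseline
        while True:
            yield mutate(habit)  # 3) random mutations of the standard action

    gen = generate_ideas(
        "bottle_blocked",
        behavior_table={"bottle_blocked": "push"},
        mutate=lambda a: random.choice([a + "_harder", "go_around", "wait"]),
    )
    print([next(gen) for _ in range(4)])  # e.g. ['push', 'no_action', 'wait', ...]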

Private Module: Beliefs

The beliefs module is a long-term persistent table associating a situation, an action, and a resulting change in situation. It has two major processes: 1) Science: it watches situations, actions, and results to populate and update its table, and 2) Prediction: it accepts a situation and proposed action (idea) and returns the predicted outcome (if known). It should also compress and simplify situations by recognizing whether particular sensory information is pertinent or irrelevant to the effects of an action.
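
At its core, the beliefs table is a mapping from (situation, action) to a resulting situation, with one process writing it and one reading it. A minimal sketch (compression of situations is omitted):

    class Beliefs:
        def __init__(self):
            self.table = {}  # (situation, action) -> resulting situation

        def science(self, situation, action, result):
            # Watch situations, actions, and results to populate the table.
            self.table[(situation, action)] = result

        def predict(self, situation, action):
            # Return the predicted outcome of an idea, if known.
            return self.table.get((situation, action))

    beliefs = Beliefs()
    beliefs.science("bottle_blocked", "go_around", "bottle_reachable")
    print(beliefs.predict("bottle_blocked", "go_around"))  # bottle_reachable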

Beliefs Interface: Input: Situation Monitor/Sense State

Beliefs must observe how situations change as a result of actions to build an understanding of the environment.

Beliefs Interface: Input/Output: Judgment

The judgment module can request a prediction by passing a situation and action and receiving a new situation.