The Hoser Paper


[Wet Dream Title]


Introduction

A robot named "Wet Dream" was constructed whose purpose was to roam around, avoid collisions, locate people, and squirt them with an onboard motorized water gun. The robot was built to maneuver in an unstructured environment and to react purposefully in most situations encountered.

Hardware

"Wet Dream" utilized a toy robot arm sold by Radio Shack called the "Mobile Armatron." The Mobile Armatron consisted of a toy robot arm mounted on a mobile base controlled manually via a wired remote. The base enabled the Armatron to move forward, backward, turn right, and turn left. Drive was provided by two motors arranged for tank-style steering. Turning right and left was accomplished by driving only one of the wheels and letting the other skid. The original toy also had a third wheel, a passive castor to provide support. A fourth wheel, another passive castor, had to be added to support the weight of the modified toy. Because of the weight of the modifications, the toy could not move around on carpeting, but still moved well on tiled floor. In addition to being able to move, the robot had an arm with a "shoulder," an "elbow," and a "wrist." The shoulder was motorized and could raise and lower the arm. The "elbow" was not active but could be set in a number of positions manually. The "wrist" could also be lowered and raised via motor control. The wrist could also be rotated and a simple pincer at the end of the wrist could be opened and closed. Crude position feedback was provided by limit switches at the bottom and top of the arm's shoulder and wrist travel. Also, a reed switch from an alarm was mounted on the robot's body so that a middle "aiming" position for the arm could be sensed. The limit switches were set up so that they could be read by the onboard computer and would automatically turn off the appropriate motor's control relay for that direction of travel. This was done in case the robot's control program did not stop the arm's travel in time to prevent it from crashing into an end-stop. Collision detection was performed via infrared proximity sensors. Specifically, the Radio Shack part number 276-137 infrared receiver was used in conjunction with some generic IR LED's. The Radio Shack part was intended for use in infrared remote controls. It contained all the circuitry necessary to detect the presence of an infrared signal modulated at 40 KHz and reject other signals. It was ideal because it was extremely sensitive and relatively immune to noise. However, it would generate pulsing signals when a received IR signal was at the edge of its detection range. This problem was corrected by a simplistic low pass filter with hysteresis built from a 555. (The actual circuit used two 558's, four 555's in one 16 pin DIP package, a schematic of which is included in Appendix B.) There were eight infrared proximity sensors mounted around the body of the robot. Five were mounted on the base, looking forward, backward, left, right, and downward in front Three more were mounted at the end of the arm looking left, right, and forward. Their general layout can be seen in Figure 1 below.

[Sensor Placement Diagram]

Figure 1: view from above showing relative placement of IR proximity sensors

The sensors were tuned to trigger when an object came within six to nine inches of them. In general, they worked well, failing only in the case of certain walls which were not infrared reflective. Most floors were not too infrared reflective either, so the robot ignored the downward-looking front sensor.

Two pyroelectric sensors were also mounted on the robot to detect body heat. One was mounted, forward looking, at the end of the arm. The other was mounted on top of a Futaba servo on the toy's "head" to scan back and forth looking for people. The pump from a toy motorized water gun was mounted on the arm, pointing forward, with its nozzle attached to the end of the arm, and a reservoir for water (a bottle from Neutrogena soap) was mounted on the base. Two LEDs were added as "eyes" to give the robot personality and to provide some information during debugging.

The "brains" of the robot consisted of an MC68HC811 microcontroller-based extended-memory (8K bytes) board (Scott's board). Because of the large number of binary inputs and outputs, some address decoding circuitry, consisting of buffers and latches, was needed on ports B and C of the processor. (A block diagram of the onboard computer is given in Appendix A.) This system handled all of the limit switches, the IR proximity sensors, the robot's eyes, and control of the eight reed relays necessary to control all of the motors of the robot.
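
As a rough illustration of how that expanded I/O might be exercised, the C fragment below sketches reading sensor bits through a buffered input address and driving a relay through an output latch. This is not the robot's actual code (which was assembly); the addresses, bit assignments, and helper names are invented for the example.

    /* Hypothetical sketch of memory-mapped I/O through the added buffers and
       latches. Addresses and bit positions are invented for illustration. */

    #define SENSOR_BUFFER ((volatile unsigned char *)0x4000) /* assumed input buffer address */
    #define RELAY_LATCH   ((volatile unsigned char *)0x5000) /* assumed output latch address */

    #define IR_FRONT         0x01   /* assumed bit assignments */
    #define IR_LEFT          0x02
    #define IR_RIGHT         0x04
    #define RELAY_LEFT_WHEEL 0x01

    static unsigned char relay_shadow;  /* the latch is write-only, so keep a shadow copy */

    static int sensor_active(unsigned char bit)
    {
        return (*SENSOR_BUFFER & bit) != 0;
    }

    static void relay_on(unsigned char bit)
    {
        relay_shadow |= bit;
        *RELAY_LATCH = relay_shadow;
    }

    static void relay_off(unsigned char bit)
    {
        relay_shadow &= (unsigned char)~bit;
        *RELAY_LATCH = relay_shadow;
    }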

Software

The software which controlled the robot was implemented in hand-coded assembly language; the subsumption compiler was not used. (A complete listing of the code which controlled the robot is contained in Appendix C.) Because of the short time available to develop the code, it is neither elegant nor efficient. Efficiency is not really an issue here, however, because the microprocessor is more than fast enough to handle all of the sensing and actions needed to control the robot.

The software was written as a set of assembly language subroutines which were called one after another in a loop. Each subroutine would perform some small task and return. In this way a simplistic round-robin scheduling algorithm was implemented in order to simulate parallel behavior. Timing was based on the MC68HC811's free-running counter's timer overflow flag, which was set about every .033 seconds. This flag was polled by an "update_time" routine which would then increment or decrement the timing variables of specific behaviors.
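
The overall loop structure can be pictured roughly as follows. This is a C sketch of the scheme just described, not the original assembly; the routine names and the timer-flag access are stand-ins.

    /* Sketch of the round-robin main loop: each routine does a little work
       and returns, simulating parallel behaviors. All names are hypothetical
       stand-ins for the assembly subroutines. */

    extern int  timer_overflowed(void);        /* assumed: true about every .033 s */
    extern void clear_timer_overflow(void);
    extern void wander(void), locate(void), shoot(void);
    extern void adjust_arm(void), scan_head(void), blink_eyes(void);

    static int mode_timer, avoid_timer, suppress_timer;  /* per-behavior timing variables */

    static void update_time(void)
    {
        if (timer_overflowed()) {              /* polled, not interrupt-driven */
            clear_timer_overflow();
            if (mode_timer     > 0) mode_timer--;
            if (avoid_timer    > 0) avoid_timer--;
            if (suppress_timer > 0) suppress_timer--;
        }
    }

    int main(void)
    {
        for (;;) {
            update_time();
            wander();
            locate();
            shoot();
            adjust_arm();
            scan_head();
            blink_eyes();
        }
    }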

Intelligence

The control system for this robot did not utilize subsumption, but it was subsumption-like. The two major components of the robot's control system were behavior modes and agents. Behavior modes are mutually exclusive sets of actions; only one of them can be active at a time. Agents are actions which can take place simultaneously with other actions. For example, locating an object and shooting an object are behavior modes. Examples of agents are scanning the head servo, blinking the LED eyes, and moving the arm. Taken together, a complete set of behavior modes makes up an agent. For example, the robot can only move forward or move backward or turn left or turn right; all of these behaviors are mutually exclusive, but taken together they form the "move robot" agent.
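
One way to picture the distinction is sketched below in C (a hypothetical rendering; the original assembly was not organized this way, and the helper names are invented):

    /* The "move robot" agent owns a set of mutually exclusive behavior modes;
       exactly one mode drives the wheels on any given pass through the loop.
       Other agents (head scan, eye blinking, arm motion) run alongside it. */

    enum move_mode { MOVE_STOP, MOVE_FORWARD, MOVE_BACKWARD, TURN_LEFT, TURN_RIGHT };

    static enum move_mode move_state = MOVE_FORWARD;

    extern void drive(int left_wheel, int right_wheel);   /* hypothetical relay helper */

    static void move_robot_agent(void)
    {
        switch (move_state) {
        case MOVE_FORWARD:  drive(1, 1);   break;
        case MOVE_BACKWARD: drive(-1, -1); break;
        case TURN_LEFT:     drive(0, 1);   break;  /* slip turn: only one wheel driven */
        case TURN_RIGHT:    drive(1, 0);   break;
        default:            drive(0, 0);   break;
        }
    }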

The intelligence system does not learn; all reactions to specific stimuli are hard-wired in. Of course, the system's behavior did not follow a set pattern, because of irregularities in the environment with which it interacted.

Behavior

As stated previously, the robot's main goal is to roam, avoiding obstacles, find people, and shoot them with its water gun. This provides the basis for its three basic behaviors: Wander, Locate, and Shoot. The major behaviors and agent-to-agent interactions in the robot are diagrammed in Figure 2 below.

[Behavior Block Diagram]

Figure 2: overall block diagram of behavior/agent interaction

An overview of these modes is as follows. In Wander mode the robot travels forward, avoiding obstacles, until a person is sensed; it then enters Locate mode. In Locate mode the robot orients its body so that its arm (and water gun) faces the target; when it is facing the target it enters Shoot mode. In Shoot mode it fires its water gun and adjusts its body orientation to compensate for movements by the target. After a short period of time the robot leaves Shoot mode and reenters Wander mode, suppressing reentry into Locate mode for a small interval. A more detailed description of each of these three behaviors follows.

Wander

In Wander mode the robot moves forward by default. While it is moving forward it continuously polls its IR proximity sensors and the head-mounted pyro, which is scanned back and forth by a servo to cover a 180-degree field of view centered on the front of the robot. Also, the arm is kept in travel position, which means wrist up and arm down. If the arm is not in travel position, it will be moved there. The wrist is checked first; if it is not in position, all other behaviors are suppressed until it is. Once the wrist is in position, arm adjustment is performed in parallel with the rest of the behaviors which make up Wander.

If a collision is detected it is processed by a collision avoidance routine which decides which action should be taken. (For processing purposes, the IR sensors on the body and the end of the arm are OR'd together, i.e. left with left, right with right, and front with front.) If an object is on the left or the right then the avoid_left or avoid_right mode is initiated. If an object is sensed in the front and the left or right then a similar action is taken. If an object is sensed solely in the front then the robot avoids left or right depending on whether the most recent avoidance was left or right. If an object is sensed in the front, left, and right then the robot will avoid_forward. If an object is sensed in the back, left, and right then the robot will avoid_backward. If an object is sensed on the left and the right then the robot will either avoid_backward or avoid_forward, depending on whether it avoided forward or backward most recently.

Whenever avoid_forward or avoid_backward is initiated, the timer for the mode is reset and whatever mode the robot was in is exited. When avoid_left or avoid_right is entered, the previous mode is terminated, but the mode timer and phase counter are not reset. All avoid actions are "motor modes," i.e. normally the robot is in no_avoid mode when traveling forward; if a collision occurs, its movement will be governed by another mode for some fixed period of time. During this time it still polls its IR and pyro sensors.
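
The decision just described amounts to something like the following C sketch (all names are hypothetical, and the sensor flags are assumed to be the body and arm sensors already OR'd together):

    /* Rough sketch of the collision-avoidance decision in Wander mode. */

    extern int  ir_front(void), ir_back(void), ir_left(void), ir_right(void);
    extern void enter_avoid_left(void), enter_avoid_right(void);
    extern void enter_avoid_forward(void), enter_avoid_backward(void);
    extern int  last_lr_avoid_was_left(void);     /* remembers the most recent left/right avoidance */
    extern int  last_fb_avoid_was_forward(void);  /* remembers the most recent forward/backward avoidance */

    static void check_collisions(void)
    {
        int f = ir_front(), b = ir_back(), l = ir_left(), r = ir_right();

        if (f && l && r) {
            enter_avoid_forward();
        } else if (b && l && r) {
            enter_avoid_backward();
        } else if (l && r) {     /* blocked on both sides: break the tie with the last forward/backward choice */
            if (last_fb_avoid_was_forward()) enter_avoid_backward();
            else                             enter_avoid_forward();
        } else if (l) {          /* covers left alone and front+left */
            enter_avoid_left();
        } else if (r) {          /* covers right alone and front+right */
            enter_avoid_right();
        } else if (f) {          /* front only: break the tie with the last left/right choice */
            if (last_lr_avoid_was_left()) enter_avoid_left();
            else                          enter_avoid_right();
        }
    }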

The avoid_forward and avoid_backward agents are very simple: they just move the robot in the appropriate direction (forward or backward) for .8 seconds to avoid the obstacle. The avoid_left and avoid_right agents are slightly more complicated. Only avoid_left will be described, but the description also applies, mirrored, to avoid_right. The avoid_left agent has four phases to its behavior. In the first phase, avoid_left turns the robot to the right for .8 seconds, at the end of which it checks the state of the IR sensors; if the obstacle is gone from the left, the avoid_left behavior ends. If the obstacle persists, the rest of avoid_left takes place, which initiates a three-phase, three-point turn. First, the robot turns to the left for .8 seconds. (If at any time during this phase an obstacle is detected in the back, avoid_left moves on to its next phase.) Second, the robot backs up for 1.6 seconds. (Again, if at any time during this phase an obstacle is detected in the back, avoid_left moves on to its next phase.) Third, the robot turns right for 1.6 seconds. After this period of time avoid_left terminates and the robot goes back into no_avoid mode. If an obstacle is sensed again, this cycle may be entered again.
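
A rough C sketch of the avoid_left sequence follows (phase times are converted to ticks of the ~.033 second timer, so .8 s is about 24 ticks and 1.6 s about 48; all names are hypothetical):

    /* Four-phase avoid_left agent. phase_timer is decremented by update_time();
       phase 0 and phase_timer = 24 are set when avoid_left is entered. */

    extern void turn_left(void), turn_right(void), go_backward(void), stop_avoid(void);
    extern int  ir_left(void), ir_back(void);

    static int phase;         /* 0..3 */
    static int phase_timer;

    static void avoid_left_agent(void)
    {
        switch (phase) {
        case 0:                                   /* turn right, away from the obstacle */
            turn_right();
            if (phase_timer == 0) {
                if (!ir_left()) { stop_avoid(); return; }   /* obstacle gone: done */
                phase = 1; phase_timer = 24;      /* otherwise start the three-point turn */
            }
            break;
        case 1:                                   /* three-point turn, part 1: turn left */
            turn_left();
            if (phase_timer == 0 || ir_back()) { phase = 2; phase_timer = 48; }
            break;
        case 2:                                   /* part 2: back up */
            go_backward();
            if (phase_timer == 0 || ir_back()) { phase = 3; phase_timer = 48; }
            break;
        case 3:                                   /* part 3: turn right, then return to no_avoid */
            turn_right();
            if (phase_timer == 0) stop_avoid();
            break;
        }
    }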

The pyros are sampled every .06 seconds (they are connected to the MC68HC811's A/D converters), and if a difference of more than 5 is sensed between successive readings then a person is "found." If the head-mounted pyro sees something, Locate mode will be entered, but only if the robot is currently in no_avoid mode. In this way the robot will only attempt to shoot something when it is in a clear, open area. While the robot is in Wander mode the eyes are on continuously.
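
In code, the person-detection test is essentially a difference threshold on successive A/D samples, roughly as below (hypothetical C; the A/D read function is a stand-in):

    /* Pyro "person found" test: sample every .06 s and report a detection
       when successive readings differ by more than 5. */

    extern unsigned char read_pyro_adc(int channel);   /* assumed 8-bit A/D read */

    static unsigned char last_reading[2];              /* channel 0: head pyro, 1: arm pyro */

    static int pyro_sees_person(int channel)
    {
        unsigned char now = read_pyro_adc(channel);
        int diff = (int)now - (int)last_reading[channel];
        last_reading[channel] = now;
        if (diff < 0) diff = -diff;
        return diff > 5;                               /* threshold from the text above */
    }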

Locate

In Locate mode the robot constantly scans the pyro mounted on the servo left and right. This scan has six phases. The first phase scans from all the way to the left to a position facing the left front of the body. The second phase scans from the left front position to a right front position. The third phase scans from the right front position to all the way to the right. Phases four, five, and six reverse this process. If the pyro sees something in the leftmost phase the robot will turn to the left; if it sees something in the rightmost phase it will turn to the right. If something is sensed during the "middle" phase, its direction is considered ambiguous, so the robot will turn in the direction it last saw the target, to the left or to the right. During this tracking behavior the robot moves the arm into shoot position, in parallel with body motion. This consists of raising the arm until the mid-body sensor is active and then pulsing the wrist downward for .9 seconds.

If two complete servo scans occur and the head sensor sees nothing, the robot goes back to Wander mode. If the robot is in Locate mode for at least 9.6 seconds, Locate mode is exited and reentry into Locate mode is suppressed for four seconds. This is done because in such a case there is evidently more than one target, or the target is moving faster than the robot can track it; there is no use in trying to shoot, and the simplest thing to do is to wander away from the confusing situation. While the robot is in Locate mode the eyes blink in a right-left alternating pattern every .2 seconds. If the arm-mounted pyro senses something, Shoot mode is entered.
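
The turn decision driven by the scan phase can be sketched as follows (hypothetical C; the phase numbering follows the six-phase sweep described above, with phases four through six mirroring one through three):

    /* Locate-mode tracking: phases 1 and 6 cover the leftmost part of the
       sweep, phases 3 and 4 the rightmost, phases 2 and 5 the ambiguous
       middle. All helper names are hypothetical. */

    extern int  scan_phase(void);            /* assumed: current sweep phase, 1..6 */
    extern int  head_pyro_triggered(void);
    extern void turn_left(void), turn_right(void);

    static int last_seen_left;               /* side on which the target was last seen */

    static void locate_track(void)
    {
        if (!head_pyro_triggered())
            return;
        switch (scan_phase()) {
        case 1: case 6:                      /* target off to the left */
            turn_left();  last_seen_left = 1; break;
        case 3: case 4:                      /* target off to the right */
            turn_right(); last_seen_left = 0; break;
        default:                             /* middle: ambiguous, use the last side seen */
            if (last_seen_left) turn_left(); else turn_right();
            break;
        }
    }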

Shoot

In Shoot mode the eyes blink on and off together every .1 seconds and the water gun is turned on. The robot continues to move in the direction it was moving, arm adjustment continues, and the servo continues to scan. If none of the pyros sense anything, the direction of turning reverses after 1 second. The Shoot behavior mode normally terminates after 3 seconds. If the head-mounted pyro senses the person outside of the center part of its swing, it changes the direction of the robot's motion appropriately to turn toward the target; it also extends the firing time to 5 seconds to compensate for the extra time needed to face the target once more. If the arm's pyro senses something, the direction of scanning reverses after a half-second delay; in this way the robot can sweep back and forth over the target, spraying it with water. When Shoot is exited, reentry into Locate is suppressed for 7 seconds so the robot can wander off and find another target. Upon exit, Wander mode is entered. Since Wander mode will immediately try to readjust the arm into travel position, a small pause will often ensue while the wrist is raised into the full-up position.
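
A compact sketch of the Shoot-mode timing rules is given below, in timer ticks (~.033 s each, so 1 s is about 30 ticks, 3 s about 90, and 5 s about 150). The helper names are hypothetical, and the "scanning" that reverses when the arm pyro fires is interpreted here as the body's turning direction, so the sketch is only one reading of the behavior just described.

    /* Shoot-mode agent. shoot_timer, no_target_timer and reverse_delay are
       decremented by update_time(); shoot_timer is set to 90 on entry. */

    extern int  any_pyro_triggered(void), head_pyro_off_center(void), arm_pyro_triggered(void);
    extern void water_gun_on(void), reverse_turn_direction(void);
    extern void turn_toward_target(void), exit_shoot(void);

    static int shoot_timer, no_target_timer, reverse_delay;

    static void shoot_agent(void)
    {
        water_gun_on();

        if (any_pyro_triggered()) {
            no_target_timer = 30;                  /* saw something: restart the 1 s countdown */
        } else if (no_target_timer == 0) {
            reverse_turn_direction();              /* nothing seen for 1 s: turn the other way */
            no_target_timer = 30;
        }
        if (head_pyro_off_center()) {
            turn_toward_target();                  /* steer back toward the person */
            shoot_timer = 150;                     /* extend firing time to 5 s (assumed interpretation) */
        }
        if (arm_pyro_triggered() && reverse_delay == 0) {
            reverse_turn_direction();              /* sweep the gun back over the target */
            reverse_delay = 15;                    /* about half a second before reversing again */
        }
        if (shoot_timer == 0)
            exit_shoot();                          /* suppresses Locate reentry for 7 s */
    }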

Results

The robot seemed to work well in practice. It avoided obstacles well and would shoot people when it found them. The system, of course, was not foolproof.

Modes of Failure

Many of the failures of the robot's behavior were the fault of its sensors and its physical system. It was not able to see walls that were not IR reflective, so it would bang into them. The fact that the pyros were limited to measuring changes in heat (temperature differences) meant that the robot would shoot at anything hot or cold, e.g. a window or a light. Another defect was speed: if a target moved, the robot could not turn fast enough to track it. The "slip turn" was also a disadvantage, in that sometimes when the robot tried to turn, its body would not respond.

Cognitive failures of behavior also occurred. One class was caused by the preprogrammed algorithm for collision avoidance, which failed in two situations. The first was when the robot was very close to an obstacle and no sequence of motions it was programmed to perform could get it free. The second was loops. A loop would occur when the robot sensed one obstacle, avoided it, and in avoiding the first obstacle sensed a second, and in avoiding that obstacle saw the first again. It would go back and forth, sensing and avoiding these two obstacles indefinitely. Another type of loop occurred if one person was on the robot's left and a second person was on the robot's right: the robot would turn to the right and to the left alternately, never settling on either person, until it timed out and exited Locate mode. A method for avoiding loops is proposed in the "Future Work" section.

Hardware limitations also caused some problems. The fact that the robot could not sense its arm position or the distance to the target meant that it would sometimes shoot over its target. Another problem was the sparseness of obstacle sensing: an obstacle could sit in one of the robot's "blind spots," and the robot would collide with it and not be able to escape.

Conclusions

Even a relatively simple "hard-wired" approach to robot behavior can accomplish a complicated task in an unstructured environment. The subsumption-like scheme used to control this robot seems to be an extremely promising method of control. The success of this experiment lends credibility to the idea that simple algorithmic behavior can produce seemingly complex behavior patterns if the environment in which the robot functions is itself complex and diverse enough.

Future Work

Much work can still be done in terms of hardware and software to improve the performance of this robot.

One possible improvement is to keep the robot from getting into loops by introducing habituation. This means that a particular sensor would either tire out or could be fixed on for some length of time after it has been constantly on, or has been pulsing on and off regularly, for some period of time. Similarly, a particular motor behavior mode, such as turn left or turn right, could tire and refuse to become active for a certain length of time. In both cases a repetitive behavior could be detected, stopped, and hopefully escaped.
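
A minimal sketch of what such habituation might look like is given below (hypothetical C; the fatigue and rest times are made up, and only the "constantly on" case is handled here, since detecting regular on/off pulsing would need a little more state):

    /* Habituation wrapper for a sensor (or motor mode) flag: an input that
       has been on continuously for too long is ignored for a rest period. */

    #define TIRE_AFTER 300    /* ticks of continuous activity before tiring (~10 s, assumed) */
    #define REST_FOR   150    /* ticks the input is then ignored (~5 s, assumed) */

    struct habituated {
        int on_ticks;         /* how long the raw input has been continuously on */
        int rest_ticks;       /* remaining time during which the input is ignored */
    };

    static int habituate(struct habituated *h, int raw)
    {
        if (h->rest_ticks > 0) {              /* tired: report "off" regardless of the input */
            h->rest_ticks--;
            return 0;
        }
        if (raw) {
            if (++h->on_ticks >= TIRE_AFTER) {
                h->on_ticks = 0;
                h->rest_ticks = REST_FOR;     /* start the rest period */
                return 0;
            }
        } else {
            h->on_ticks = 0;                  /* input released: reset the fatigue counter */
        }
        return raw;
    }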

When the robot was constructed, a Polaroid ultrasonic transducer, usable for distance measurement, was mounted above the body-heat sensor on the servo. The original intent was to use it to continuously search for depth discontinuities in the environment, which would be identified as doors that the robot would try to head toward and through. This was not implemented because of time constraints on project completion. Once the project was completed, it became obvious that the sonar could also be used to sense the range to a target. If a position encoder were added to the arm, it would be possible to use this information to aim the water gun more accurately at the target.

Another simple modification, which would improve mechanical performance, would be to change the relay arrangement so the robot could make real tank turns, where both wheels are driven. This change might even enable the robot to maneuver on carpeted floors.


Appendix A: Onboard Computer Block Diagram

[Computer Block Diagram]


Appendix B: IR Receiver Circuit

[IR Circuit Diagram]


Appendix C: Assembly Language Listing of Robot Control Code

To see the code click here.


Last Update: 9/24/96