
Software

The robots I have made previously have mainly been concerned with obstacle avoidance and, in the case of Lawna, mowing some grass while stumbling around. My studies on localization and mapping have made it clear that increased intelligence needs to be added to a robot in layers. Although Mo will be a fairly dumb machine, it is a good opportunity to structure the software into 'intellectual layers'. This will make it easier to modify and extend going forward.

For these simple machines, two layers should be sufficient: a "cognitive layer" and an "autonomous layer". Adding the cognitive layer will pave the way to adding localization and planning, using much more complex algorithms. Mo will probably never progress much beyond being an automaton, but he will form a good base to start with.

The Dawning of Purpose

If Mo is to be an autonomous entity, it must have a purpose in life. Most lifeforms need to eat, sleep and reproduce; their activity at other times is geared towards eating and reproduction. A robot's goals are modified somewhat, as it is not driven by reproduction or feeding; its purpose is achieved in the time in between. So the robot needs to feed and sleep in order to be able to work, rather than the other way round.

Boiling the meaning of existence down to its roots leads me to conclude that the robot cycles between three phases. Luckily we do not have to deal with reproduction.

  • Work
  • Feed
  • Sleep

The robot will need to switch between these phases depending on its environment and current energy levels. This gives Mo a reason for being: he needs to work by mowing the lawn, avoiding objects in the process. His 'energy' level is battery charge, and when this gets low he must seek out a way to 'feed' in order to replenish lost energy. He must also sleep occasionally, decided by monitoring the environment, as we don't want to mow the lawn at 3am, or when it is covered in snow.

Cognitive Layers

The three phases of 'Work', 'Sleep' and 'Feed' should sit in the cognitive layer, where informed decisions can be made, whereas some standard functionality can be pushed down to the autonomous layer. For example, during the 'work' mode, Mo needs to trundle forward and mow the grass. However, if he hits an obstacle, some form of evasive action is necessary. Evasive action can be pushed to the lowest level of thinking: it should be autonomous, pretty much like breathing, and called upon whilst performing the higher-level functionality of work (mowing). There may well be other autonomous functionality added going forward, but one step at a time.

The Cognitive States

  • sleep - off or standby
  • work - mowing
  • feed - find and recharge

In order to move effectively between states, the robot will need to read input from its sensors. Some sensor inputs are of no interest to the cognitive layer. For example, a triggered bump sensor indicates there is an obstacle in the way; this will not determine whether we need to change to the sleep or feed states, so it belongs to the autonomous obstacle-avoidance layer. Battery charge, on the other hand, must be monitored by the cognitive layer, as it determines when to change state.

Although some sensors are not relevant to the cognitive layer, it makes sense for all inputs to go through it. It can choose to ignore them and push them down to the autonomous layer, giving a single point of control. It also gives a single point at which to collect data for telemetry and logging. This leads me to the first software design rule I will adopt during this build.

Design Rule 1: All sensor inputs will be passed to the cognitive layer, which will either act on them or pass them on to the autonomous layer or other subsystems.
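Design Rule 1 boils down to a single entry point for all readings. A minimal sketch in C follows; the struct fields, the low-charge threshold and the function names are my own illustrations, not code from the actual build:

```c
#include <stdbool.h>

/* One snapshot of every sensor reading; field names are assumptions. */
typedef struct {
    bool  bumpLeft, bumpRight;   /* fender micro switches        */
    bool  perimeter;             /* perimeter wire detected      */
    float batteryVolts;          /* lead-acid pack voltage       */
    bool  daylight;              /* LDR above daylight threshold */
} SensorData;

static int autonomousCalls = 0;  /* counts hand-offs, for illustration */

/* Stub for the lower layer; the real one performs obstacle avoidance. */
static void autonomousLayer(const SensorData *s)
{
    (void)s;
    autonomousCalls++;
}

/* Design Rule 1: every reading enters here. Battery and daylight are
 * acted on at this level; the whole snapshot is then passed down
 * unchanged, giving a single point for control, telemetry and logging. */
void cognitiveLayer(const SensorData *s)
{
    if (!s->daylight) {
        /* consider a transition to the sleep state ... */
    }
    if (s->batteryVolts < 11.5f) {   /* illustrative low-charge cutoff */
        /* consider a transition to the feed state ... */
    }
    autonomousLayer(s);   /* bump and perimeter are handled below */
}
```

The cognitive layer never filters the data out; it simply decides what it cares about and hands the rest down.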

Sensors Revisited

The number of sensors used by the robot will be small. In future this could be expanded, but the use of a perimeter wire does reduce the need for additional sensors.

Sensor: Perimeter Detection
  Purpose: To detect the edge of the mowing area.
  Type: RF receiver/detector.
  Action Required: Stop, reverse and turn around.
  Intelligence Layer: Autonomous layer.

Sensor: Bump (Touch) Sensor
  Purpose: To detect if we hit an obstacle in the robot's path.
  Type: Micro switches with fender.
  Action Required: Stop, reverse and turn around. Using several switches will give directional information.
  Intelligence Layer: Autonomous layer.

Sensor: Battery Voltage Monitor (Lead Acid)
  Purpose: To determine when we need to recharge.
  Type: Comparator.
  Action Required: Change from the current state to the 'Feed' state and take appropriate actions.
  Intelligence Layer: Cognitive layer.

Sensor: Daylight Sensor
  Purpose: So we only mow grass in daylight hours.
  Type: Light dependent resistor.
  Action Required: Change from the current state to the 'Sleep' state and take appropriate actions.
  Intelligence Layer: Cognitive layer.

Finite State Machine

The higher level 'thinking' is implemented as a finite state machine. The high level state can be broken down into several substates, each with predefined actions. The three main states are implemented as WORK (mow), SLEEP and FEED (recharge). The only things that affect a cognitive state change are daylight and battery charge.
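Since only those two inputs matter, the transition logic reduces to one function of the current state, the battery voltage and the daylight flag. A sketch in C, with illustrative thresholds (not values from the actual build):

```c
#include <stdbool.h>

typedef enum { STATE_SLEEP, STATE_WORK, STATE_FEED } CognitiveState;

/* Illustrative lead-acid thresholds; real cutoffs would be tuned. */
#define BATTERY_LOW_VOLTS  11.5f
#define BATTERY_FULL_VOLTS 13.0f

/* Only daylight and battery charge drive cognitive transitions. */
CognitiveState nextState(CognitiveState s, float batteryVolts, bool daylight)
{
    if (!daylight)
        return STATE_SLEEP;              /* never mow at 3am */
    if (batteryVolts < BATTERY_LOW_VOLTS)
        return STATE_FEED;               /* seek the charger */
    if (s == STATE_FEED && batteryVolts < BATTERY_FULL_VOLTS)
        return STATE_FEED;               /* keep charging until full */
    return STATE_WORK;                   /* daylight and charged: mow */
}
```

The extra clause for STATE_FEED adds hysteresis, so the robot charges fully rather than oscillating around the low-voltage cutoff.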

Autonomous Actions

There are two occasions when the robot needs to autonomously take action, independently from the cognitive states:

  • When it hits an obstacle
  • When it breaches the perimeter
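Both triggers can share one "stop, reverse, turn" reflex, with the fender switches supplying the directional hint mentioned in the sensor table. The motor primitives below are stubs that just record the last manoeuvre, since the drive code is outside the scope of this article:

```c
#include <stdbool.h>
#include <string.h>

/* Stubbed motor primitives that record the last manoeuvre; the real
 * versions would drive the motor controller. */
static char lastAction[16] = "none";
static void motorsStop(void)            { strcpy(lastAction, "stop"); }
static void motorsReverse(int ms)       { (void)ms; strcpy(lastAction, "reverse"); }
static void motorsTurn(bool cw, int ms) { (void)ms; strcpy(lastAction, cw ? "turn-cw" : "turn-ccw"); }

/* Stop, reverse, then turn away from whichever side was hit.
 * Returns true when evasive action was actually taken. */
bool evasiveAction(bool bumpLeft, bool bumpRight, bool perimeter)
{
    if (!bumpLeft && !bumpRight && !perimeter)
        return false;                 /* nothing to avoid */
    motorsStop();
    motorsReverse(800);               /* back off (~0.8 s, illustrative) */
    motorsTurn(bumpLeft, 600);        /* hit on the left: turn clockwise */
    return true;
}
```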

As with the cognitive layer, the autonomous layer does not need to know about some sensor readings. However, passing all the sensor data to the autonomous layer means we can expand its role if necessary. In fact, the autonomous layer could take different actions if it knew which cognitive state the robot was in. We need to take care not to create a tangled mess, but with careful design the software can be kept layered and extensible. This leads to design rule 2:

Design Rule 2: All sensor inputs and the cognitive state will be passed to the autonomous layer which will either act on them or allow the cognitive layer to take over.
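Design Rule 2 can be sketched as the autonomous layer's entry point: it receives the readings plus the cognitive state, acts if a reflex applies, and reports whether it did, so the caller knows whether the cognitive layer should take over. The names here are illustrative:

```c
#include <stdbool.h>

typedef enum { STATE_SLEEP, STATE_WORK, STATE_FEED } CognitiveState;

/* Returns true when the autonomous layer handled the situation itself. */
bool autonomousLayer(CognitiveState state, bool bump, bool perimeter)
{
    if (state == STATE_SLEEP)
        return false;            /* asleep: no reflexes needed */
    if (bump || perimeter) {
        /* stop, reverse and turn around ... */
        return true;             /* handled at this level */
    }
    return false;                /* nothing to do; cognitive layer decides */
}
```

Gating on the cognitive state is one example of the layer behaving differently depending on what the robot is doing, without the two layers becoming entangled.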

Brain Operation

The above design decisions lead to a very simple processing loop, where the current cognitive state is a global value:

	while (true) {
		SensorData sensorData = readAllSensors();   /* snapshot every sensor     */
		autonomousActions(&sensorData);             /* reflexes: bump, perimeter */
		nextCognitiveState(&sensorData);            /* work / feed / sleep FSM   */
	}

The autonomous actions could be subsumed into the code that sits under nextCognitiveState(), but it makes better logical sense to split them out. In an ideal world the autonomous actions would be interrupt driven, sitting in an ISR, but in this first incarnation a polling loop should be sufficient, so long as we do not spend excessive time in a cognitive state.
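If the reflexes are made interrupt driven later, a common pattern is to keep the ISR tiny: it only sets a flag, which the main loop checks first. A generic sketch, not tied to any particular microcontroller:

```c
#include <stdbool.h>

/* Set by the (hypothetical) pin-change ISR, cleared by the loop. */
static volatile bool obstacleFlag = false;

/* On a real MCU this would be registered as the bump/perimeter
 * interrupt handler; it does the bare minimum and returns. */
void bumpISR(void)
{
    obstacleFlag = true;
}

/* One pass of the brain loop: check the reflex flag first, then run
 * the cognitive state machine as in the polling loop above.
 * Returns true when evasive action was triggered on this pass. */
bool brainLoopOnce(void)
{
    bool evaded = false;
    if (obstacleFlag) {
        obstacleFlag = false;
        /* evasive action: stop, reverse, turn ... */
        evaded = true;
    }
    /* nextCognitiveState(...) would run here */
    return evaded;
}
```

Keeping the ISR minimal means the timing-critical response (noticing the bump) is immediate, while the slower manoeuvre still runs in the ordinary loop.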

August 2014

