
Robot Localization

Making a Robot know where it is

[Image: sandbot]

Where am I?

Introduction

My tinkerings with robotics have led me to the point where it should be possible to realize many of the 'cognitive' functions, such as searching, mapping and localization. The aim going forward is for the robot to 'sense' the world in which it finds itself, process the sensor data and then 'move' based on the results of the processing. This sounds really obvious, but up until this stage, I have been trying to make the mechanical, electrical and software domains talk to each other.

Now that Sandbot has a wireless telemetry link, I am going to give it a (temporary) lobotomy. The robot will sense its environment and then transmit those readings back to the computer. The computer will perform the processing required and send commands back to the robot to move. This gives me the freedom to implement the processing (cognitive) logic in a high-level language on a platform that is less constrained by memory than the robot itself. In effect, the robot becomes a mobile client and the computer behaves like the controlling server.

The main software loop on the robot will therefore reduce down to something like this:

  while (true) {
    transmitData(readSensors());   // send the latest sensor readings to the server
    move(receiveCommand());        // act on the command the server sends back
  }

Going forward, the robot itself could have a lot of low-level intelligence to ensure it moves as accurately as it can based on a received command. This may involve odometry and PID control, although for now it is an open-loop system. However, my focus is going to shift to the 'server' to realize some of the high-level functions.
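The matching server-side loop could be sketched in Python (my choice of high-level language here). Everything in this sketch is hypothetical: receive_readings and send_command stand in for whatever the telemetry link provides, and decide is a placeholder for the cognitive processing to come.

```python
def decide(readings):
    """Placeholder 'cognitive' step: turn sensor readings into a command.
    Here: go forward unless the square ahead is blocked, else turn left."""
    return "turn_left" if readings.get("front_blocked") else "forward"

def serve(receive_readings, send_command):
    """Mirror of the robot's sense-move loop, run on the computer."""
    while True:
        readings = receive_readings()    # sensor data sent by the robot
        if readings is None:             # telemetry link closed
            break
        send_command(decide(readings))   # movement command back to the robot
```

The point of the split is that decide can grow arbitrarily clever without touching the robot's firmware.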

Cognitive Functions

There are several areas I would like to explore, the main ones being:

  • Searching and Exploration Techniques
  • Map Building
  • Localization to a map
  • Combinations of the above (eg SLAM)

Once we have a map and the robot knows where it is on that map, there are several searching techniques that could be employed. A vacuum-cleaner robot that knows where it has already cleaned would be infinitely better than one that randomly stumbles around and keeps cleaning the same piece of carpet.

Building up a map in memory so the robot knows where it has been is an interesting and difficult problem. I have read about reinforcement learning and hope to use this to help with map generation. However, it is difficult to build up a map without the robot knowing where it is in the first place.

Localization involves the robot being given a map and then trying to identify where it is on that map. This fits well with my sense-process-move architecture, so this is the area I will investigate first.

Localization Prerequisites

In order to do some localization experiments, we need a map of the robot's world to work with. To make life easier for myself, the world will be a 2-dimensional grid where each grid square is the size of the robot footprint. This is a fairly coarse way to subdivide the problem, but I need to start somewhere. If things work out, the grid may be subdivided further to give greater accuracy, or even replaced with a non-discrete world. However, I suspect the errors generated during sensing and moving will dwarf the granularity of the grid anyway.
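A minimal sketch of how the robot-world could be held in memory on the server, assuming a 10x10 grid of booleans with True meaning blocked. The bounding walls don't need to be stored: anything off the edge of the grid simply counts as a wall.

```python
SIZE = 10  # the bounded robot-world is SIZE x SIZE grid squares

def make_grid(obstacles=()):
    """Empty SIZE x SIZE grid with optional blocked cells given as (row, col)."""
    grid = [[False] * SIZE for _ in range(SIZE)]
    for r, c in obstacles:
        grid[r][c] = True
    return grid

def blocked(grid, r, c):
    """Is square (r, c) impassable? Off-grid squares count as walls."""
    if not (0 <= r < SIZE and 0 <= c < SIZE):
        return True
    return grid[r][c]
```

Treating out-of-bounds as blocked keeps the sensing code free of special cases at the edges.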

I would like the robot to localize using just its sensors. This means no external beacons to align with and no odometry. I may need to add odometry later, but humans don't navigate by counting footsteps and I don't think a robot should have to either. Obviously it helps, but initially I'll try to use sensor data alone.

Initial Assumptions

The ultrasonic sensor can measure distances up to several metres. Initially, to make the problem easier to program, I will dumb the sensor reading down to a binary one: something is either adjacent to the robot or it is not. I have found the ultrasonic distance sensors to be pretty accurate (more so than the IR sensors), so initially I will assume that the sensor values are always correct. Later on, it should be possible to add some kind of error model, but not just now.
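Collapsing the range reading to a binary flag might look like this sketch. The cell size here is hypothetical, not a measurement of the real robot footprint.

```python
CELL_SIZE_CM = 30  # hypothetical grid-square size; one robot footprint

def adjacent_blocked(distance_cm):
    """True if the ultrasonic echo came back from within the neighbouring
    grid square, i.e. 'something is adjacent to the robot'."""
    return distance_cm < CELL_SIZE_CM
```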

Robot movement is more fraught with error than sensor measurement. The robot could be commanded to move forward but hit a wall and not move at all. Likewise, it could be asked to turn left by 90 degrees and only turn 20 degrees. Initially, I assume that the robot moves one grid square at a time and that movements are always correct. I think this will need to change pretty rapidly, but I'm walking, not running, just yet.

So, assuming the 'robot-world' is a bounded 10x10 grid and the robot can use its distance sensor to know whether the adjacent grid squares are free or blocked, let's investigate the 'sense' part of the problem before moving on to the 'movement' part of the sense-move cycle.

Sensing Only

[Image: grid0]

When the robot is placed randomly somewhere in the centre of the grid, it has no idea where it is from sensor measurements alone (all adjacent squares are free). It could therefore be in any one of 64 possible locations (the 8x8 block of squares not adjacent to an edge).

[Image: grid1]

If the robot is placed in the top right-hand corner (0,9), it detects a wall in front and a wall to the right. This can only happen in 4 places on the grid, so the robot could be in one of 4 locations: (0,0), (0,9), (9,0) or (9,9). Without any further input, the robot cannot localize any better, but it is a lot better informed than in the previous example.
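Both counts so far can be checked with a few lines of Python on an empty, bounded 10x10 grid: the squares with no adjacent wall, and the squares with two adjacent walls (the corners). Off-grid squares count as walls.

```python
SIZE = 10

def adjacent_walls(r, c):
    """How many of the four neighbouring squares lie off the empty grid."""
    neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return sum(not (0 <= nr < SIZE and 0 <= nc < SIZE)
               for nr, nc in neighbours)

all_free = [(r, c) for r in range(SIZE) for c in range(SIZE)
            if adjacent_walls(r, c) == 0]   # the 8x8 interior: 64 squares
corners = [(r, c) for r in range(SIZE) for c in range(SIZE)
           if adjacent_walls(r, c) == 2]    # the four corner squares
```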

[Image: grid2]

Now let's add a compass sensor to the robot and align the grid with magnetic north. When the robot is in the top right-hand corner (0,9), it sees a wall to the north and a wall to the east. There is only one square in the whole grid where this is true, so the robot is now localized and knows exactly where it is. I believe the robot needs to know at least its own bearing, so a distance sensor and a compass should be enough for basic localization.
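With a compass the reading becomes direction-tagged, and the uniqueness claim can be checked. This sketch assumes row 0 is the northern edge and column 9 the eastern edge, matching the (0,9) corner above.

```python
SIZE = 10

def blocked(r, c):
    """Empty grid: only the bounding walls exist."""
    return not (0 <= r < SIZE and 0 <= c < SIZE)

def signature(r, c):
    """(north, east, south, west) wall flags for square (r, c)."""
    return (blocked(r - 1, c), blocked(r, c + 1),
            blocked(r + 1, c), blocked(r, c - 1))

# Squares that see a wall to the north and east, and free space south and west.
matches = [(r, c) for r in range(SIZE) for c in range(SIZE)
           if signature(r, c) == (True, True, False, False)]
```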

[Image: grid3]

However, even with a compass as well as a distance sensor, we fall foul again if there is an obstacle on the grid that looks like the top right-hand corner. To the robot, squares (0,9) and (3,6) look identical, so in this case it could be in either of two locations.

The above discussion shows that knowing the robot's bearing helps with localization using sensors alone. However, in the last case, although we narrowed the localization space down to two grid squares, we can do no better. What happens if we now try to move the robot and sense again? Given we are down to two candidate locations, it should be possible to distinguish between them by simply moving and re-sensing.

Sensing then Moving

[Image: grid4]

This takes the previous case, where the robot knew it was at either (0,9) or (3,6). If we move west by one square and measure again, we see no wall to the north. This means we must previously have been at (3,6), not (0,9), and we now know we are in grid square (3,5) - given accurate robot movement. So by iteratively sensing and moving, we have a greater chance of localizing to the map than by sensing alone.
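The whole sense-move-sense argument can be played out in code. The obstacle squares (2,6) and (3,7) are my guess at a layout that makes (3,6) look like the top right-hand corner; the coordinates are illustrative, not taken from the actual figure.

```python
SIZE = 10
OBSTACLES = {(2, 6), (3, 7)}   # hypothetical layout: (3,6) mimics the corner

def blocked(r, c):
    """Obstacles and off-grid squares both count as walls."""
    return (r, c) in OBSTACLES or not (0 <= r < SIZE and 0 <= c < SIZE)

def signature(r, c):
    """(north, east, south, west) wall flags; row 0 is the northern edge."""
    return (blocked(r - 1, c), blocked(r, c + 1),
            blocked(r + 1, c), blocked(r, c - 1))

def matches(sig):
    """All free squares whose sensor signature equals sig."""
    return [(r, c) for r in range(SIZE) for c in range(SIZE)
            if not blocked(r, c) and signature(r, c) == sig]

# First sense: wall to the north and east -> two candidate locations.
before = matches((True, True, False, False))        # (0,9) and (3,6)

# Move one square west (column - 1), assuming perfect movement, and re-sense.
# Only candidates that now see NO wall to the north survive.
after_move = [(r, c - 1) for r, c in before]
survivors = [(r, c) for r, c in after_move if not signature(r, c)[0]]
```

Had there been a wall to the north after the move, the surviving candidate would have been (0,8) instead, so either outcome localizes the robot.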
