Robot Localization

Making a robot know where it is
Introduction

My tinkerings with robotics have led me to the point where it should be possible to realize many of the 'cognitive' functions, such as searching, mapping and localization. The aim going forward is for the robot to 'sense' the world in which it finds itself, process the sensor data and then 'move' based on the results of that processing. This sounds really obvious, but up until this stage I have been trying to make the mechanical, electrical and software domains talk to each other.

Now that Sandbot has a wireless telemetry link, I am going to give it a (temporary) lobotomy. The robot will sense its environment and then transmit those readings back to the computer. The computer will perform the processing required and send commands back to the robot to move. This gives me the freedom to implement the processing (cognitive) logic in a high-level language on a platform that is less constrained by memory than the robot itself. In effect, the robot becomes a mobile client and the computer behaves like the controlling server. The main software loop on the robot therefore reduces down to something like this:

    while (true) {
        transmitData(readSensors());
        move(receiveCommand());
    }

Going forward, the robot itself could have a lot of low-level intelligence to ensure it moves as accurately as it can based on a received command. This may involve odometry and PID control, although for now it is an open-loop system. However, my focus is going to shift to the 'server' to realize some of the high-level functions.
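To make the client/server split concrete, here is a minimal sketch of what the 'server' side of that loop might look like. It assumes Python as the high-level language, that the wireless telemetry link shows up as a serial port (via pyserial), and a simple line-based protocol; the port name, baud rate, 'F' command and the choose_move function are placeholders of mine rather than Sandbot's real interface.

    import serial  # pyserial: assumes the wireless telemetry link looks like a serial port

    def choose_move(reading, state):
        """Placeholder for the high-level (cognitive) logic: update the current
        belief from the latest sensor reading and pick the next movement command."""
        # Localization / mapping / searching will eventually live here.
        return b"F", state          # e.g. 'F' = move forward one square (made-up command)

    def control_loop(port="/dev/ttyUSB0", baud=9600):
        state = None
        with serial.Serial(port, baud, timeout=1) as link:
            while True:
                reading = link.readline().strip()   # robot side: transmitData(readSensors())
                if not reading:
                    continue                        # nothing arrived this cycle
                command, state = choose_move(reading, state)
                link.write(command + b"\n")         # robot side: move(receiveCommand())

Keeping all of the interesting state on the computer side is the point of the 'lobotomy': the robot's own loop stays trivial while the cognitive logic is free to grow on the server.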
Cognitive Functions

There are several areas I would like to explore, the three main areas being:

- Searching: once we have a map and the robot knows where it is on that map, there are several searching techniques that could be employed. A vacuum cleaner robot that knows where it has already cleaned would be infinitely better than one that randomly stumbles around and keeps cleaning the same piece of carpet.
- Mapping: building up a map in memory so the robot knows where it has been is an interesting and difficult problem. I have read about reinforcement learning and hope to use it to help with map generation. However, it is difficult to build up a map without the robot knowing where it is in the first place.
- Localization: the robot is given a map and then tries to identify where it is on that map. This fits well with my sense-process-move architecture, so this is the area I will investigate first.

Localization Prerequisites

In order to do some localization experiments, we need a map of the robot's world to work with. To make life easier for myself, the world will be a two-dimensional grid where each grid square is the size of the robot's footprint. This is a fairly coarse way to subdivide the problem, but I need to start somewhere. If things work out, the grid may be subdivided further to give greater accuracy, or I may even move to a non-discrete world. However, I think the errors generated during sensing and moving will probably dwarf the granularity of the grid.

I would like the robot to localize using just its sensors. This means no external beacons to align with and no odometry. I may need to add odometry later, but humans don't navigate by counting footsteps and I don't think a robot should either. Obviously it helps, but initially I'll try to use sensor data alone.

Initial Assumptions

The ultrasonic sensor can measure distances up to several metres. Initially, to make the problem easier to program, I will dumb the sensor reading down to a binary value: either something is adjacent to the robot or it is not. I have found the ultrasonic distance sensor to be pretty accurate (more so than the IR sensors), so initially I will assume that the sensor values are always correct. Later on, it should be possible to add some kind of error calculation, but not just now.

Robot movement is more fraught with error than sensor measurement. The robot could be commanded to move forward, hit a wall and not move at all. Likewise, it could be asked to turn left by 90 degrees and only turn by 20 degrees. Initially, I assume that the robot moves one grid square at a time and that movements are always correct. I think this will need to change pretty rapidly, but I'm walking, and not up and running, just yet.

So, assuming the robot's world is a bounded 10x10 grid and the robot can use its distance sensor to tell whether the adjacent grid squares are free or blocked, let's investigate the 'sense' part of the problem before moving on to the 'movement' part of the sense-move cycle.

Sensing Only
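To make the sensing step concrete, here is a minimal sketch of sensing-only localization under the assumptions above: a bounded 10x10 grid of free/blocked squares and a binary reading for each of the four adjacent squares, ordered relative to the robot (front, right, back, left). The map layout, the candidate-filtering approach and the names WORLD, blocked, expected_signature and candidates_matching are my own illustration of the idea, not the code actually running on the server.

    # A made-up bounded 10x10 world: 1 = blocked (wall/obstacle), 0 = free.
    WORLD = [[1] * 10] + [[1] + [0] * 8 + [1] for _ in range(8)] + [[1] * 10]

    # Map-frame neighbour offsets for the four headings.
    OFFSETS = {'N': (-1, 0), 'E': (0, 1), 'S': (1, 0), 'W': (0, -1)}

    def blocked(row, col):
        """Anything outside the grid counts as blocked (the world is bounded)."""
        inside = 0 <= row < len(WORLD) and 0 <= col < len(WORLD[0])
        return (not inside) or WORLD[row][col] == 1

    def expected_signature(row, col, heading):
        """Binary readings (front, right, back, left) the robot should get if it
        were sitting at (row, col) facing 'heading'."""
        order = 'NESW'
        start = order.index(heading)
        return tuple(blocked(row + OFFSETS[order[(start + i) % 4]][0],
                             col + OFFSETS[order[(start + i) % 4]][1])
                     for i in range(4))

    def candidates_matching(reading, heading):
        """Every free square whose expected signature matches the actual reading."""
        return [(r, c)
                for r in range(len(WORLD))
                for c in range(len(WORLD[0]))
                if not blocked(r, c) and expected_signature(r, c, heading) == reading]

For example, candidates_matching((True, False, False, False), 'N') returns every free square that has a wall directly ahead when facing north and free squares to the right, behind and to the left. Because the signature is robot-relative, the assumed bearing matters; and with a symmetric map like the one above, several squares can produce identical signatures, so sensing alone may only narrow things down to a handful of candidates.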
The above discussion shows that knowing the bearing of the robot can help with localization using sensors alone. However, in the last case, although we had narrowed the localization space down to two grid squares, we could do no better. What happens if we now move the robot and sense again? Given that we had just narrowed it down to two locations, it should be possible to distinguish between them simply by moving and re-sensing.

Sensing then Moving
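Continuing the sketch from the Sensing Only section (it reuses WORLD, OFFSETS, blocked, expected_signature and candidates_matching from that block), here is one way the move step might be combined with a second sensing pass. The perfect-movement assumption from earlier is baked in, and the command names 'forward', 'left' and 'right', along with the function names, are again placeholders of mine.

    def move_candidates(candidates, heading, command):
        """Shift each candidate location by the commanded move, assuming (as above)
        that movement is perfect: one square forward, or a 90-degree turn on the spot."""
        order = 'NESW'
        if command == 'forward':
            dr, dc = OFFSETS[heading]
            moved = [(r + dr, c + dc) for r, c in candidates]
            # A candidate that would have driven into a blocked square is
            # inconsistent with a successful forward move, so drop it.
            return [(r, c) for r, c in moved if not blocked(r, c)], heading
        if command == 'left':
            return candidates, order[(order.index(heading) - 1) % 4]
        if command == 'right':
            return candidates, order[(order.index(heading) + 1) % 4]
        raise ValueError("unknown command: " + command)

    def sense_move_sense(first_reading, heading, command, second_reading):
        """Sense, move, then sense again: only locations consistent with both
        readings and the move in between survive."""
        candidates = candidates_matching(first_reading, heading)
        candidates, heading = move_candidates(candidates, heading, command)
        return [(r, c) for r, c in candidates
                if expected_signature(r, c, heading) == second_reading]

If the two remaining candidates produce different signatures after the move, the second filter leaves exactly one location, which is the whole point of adding the move to the sense cycle.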