This topic will be covered over a series of articles. Visual guidance for autonomous robots is broadly divided into two areas:
- Unmanned Ground Vehicle (UGV)
- Intelligent Transport System (ITS)
UGVs are concerned with terrain movement and off-road navigation. ITS is concerned with achieving efficient and safer transport in forest and urban environments. Together, these are commonly termed visual guidance.
What can visual guidance do for a robot?
- Following a road without deviation
- Obstacle detection
- Tracking and detection of oncoming vehicles
These tasks apply to both UGVs and ITS.
Now, let us look at the main components used in visual guidance:
CMOS (Complementary Metal Oxide Semiconductor) is an image sensor in which you can access individual blocks of pixels, and external circuitry can be added to the imager for extra features. Another important component is the IR camera, which detects objects such as manhole covers and lamp-posts and supports 2D and 3D scanning.
But now we have one common tool for complete visual guidance, known as the KINECT. It has an IR projector, an RGB sensor, an IR sensor, and a microphone array, which means that almost everything we need for visual guidance is embedded in a single device.
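To give a feel for what the KINECT's depth stream enables, here is a minimal sketch of depth-based obstacle detection. The depth frame is simulated as a NumPy array of distances in millimetres; a real frame would come from a Kinect driver (e.g. libfreenect), which is not shown here.

```python
import numpy as np

def find_obstacles(depth_mm, max_range_mm=1000):
    """Return a boolean mask of pixels closer than max_range_mm.

    depth_mm: 2D array of depth readings in millimetres (0 = no reading).
    """
    valid = depth_mm > 0                  # 0 means the sensor saw nothing
    return valid & (depth_mm < max_range_mm)

# Simulated 4x6 depth frame: background at 3 m, one near object at 0.6 m.
frame = np.full((4, 6), 3000, dtype=np.int32)
frame[1:3, 2:4] = 600                     # a 2x2 obstacle in the centre
frame[0, 0] = 0                           # a dead pixel / missing reading

mask = find_obstacles(frame)
print(mask.sum())  # → 4 obstacle pixels
```

Thresholding the depth map like this is the simplest possible obstacle detector; real systems would also cluster the mask into objects and track them over frames.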
For image processing with the KINECT, we can use the Processing IDE, in which you can extract images and perform all the processing in one place.
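As an illustration of the kind of per-pixel processing one might run on an RGB frame grabbed from the KINECT: the sketch below converts a colour image to grayscale, a typical first step before edge or feature detection. (Processing itself is Java-based; this is an equivalent idea expressed in Python with NumPy.)

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an HxWx3 uint8 RGB image to grayscale using the
    standard ITU-R BT.601 luminance weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb @ weights).astype(np.uint8)

# A tiny 2x2 synthetic frame: red, green, blue, and white pixels.
frame = np.array([[[255, 0, 0], [0, 255, 0]],
                  [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

gray = to_grayscale(frame)
print(gray.shape)  # → (2, 2): one intensity value per pixel
```

Note how green contributes most to perceived brightness and blue the least, which is why the weights are unequal.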
Then we move to Artificial Intelligence algorithms such as the Kalman filter, which updates the estimates of the camera's position on the vehicle and the locations of observed features. We also have SLAM (Simultaneous Localization and Mapping) for fusing all the available data into a consistent output. Localization on its own typically has an error of 2-5 meters, so we combine SLAM and localization algorithms to obtain an accurate output for the autonomous robot.
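The Kalman filter idea can be sketched in one dimension: estimating a vehicle's position along a road from noisy sensor readings. The motion model and noise values below are illustrative assumptions, not figures from this article.

```python
import numpy as np

def kalman_step(x, P, z, u=1.0, Q=0.01, R=4.0):
    """One predict/update cycle for a scalar state.

    x, P : prior position estimate and its variance
    z    : new (noisy) measurement
    u    : known motion this step (control input)
    Q, R : process and measurement noise variances
    """
    # Predict: the vehicle moves by u; uncertainty grows by Q.
    x_pred = x + u
    P_pred = P + Q
    # Update: blend the prediction with measurement z via the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# The vehicle moves 1 m per step; the sensor reads position with 2 m noise.
x, P = 0.0, 1.0
true_pos = 0.0
rng = np.random.default_rng(0)
for _ in range(50):
    true_pos += 1.0
    z = true_pos + rng.normal(0, 2.0)   # noisy sensor reading
    x, P = kalman_step(x, P, z)
print(abs(x - true_pos))  # estimation error after 50 steps
```

After many steps the filter's variance P settles well below the raw sensor variance, which is exactly the data-fusion benefit that SLAM systems exploit at larger scale.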