Cog-Burn Simple Perception Stack
by karlcastleton
This will be one of the shortest instructables for Cog-Burn.
The Lidar Lite example code is basically the starting point; we used the version that uses the standard Arduino Wire library. We then added code to turn on up to three LEDs. Each LED indicates that the LIDAR sees an object within 1 m, 2 m, or 3 m, respectively. The code we added is shown below and was inserted at line 50 of the example.
    reading |= Wire.read();                     // receive low byte as lower 8 bits
    if (reading < 100) digitalWrite(13, HIGH);  // start addition
    else digitalWrite(13, LOW);
    if (reading < 200) digitalWrite(12, HIGH);
    else digitalWrite(12, LOW);
    if (reading < 300) digitalWrite(11, HIGH);
    else digitalWrite(11, LOW);                 // end addition
    Serial.println(reading);                    // print the reading
Our design has always been that seeing farther than 3 m is not needed, because you are only going to command the robot to drive a few feet before you check that everything makes sense and measure the distances to the object you are focused on at the moment.
Getting the distance from the LIDAR is then just a matter of reading the latest ASCII number the Arduino prints to the serial device /dev/ttyUSB#.
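As a rough illustration, a minimal host-side reader might look like the sketch below. This is not the Cog-Burn code; it assumes the Arduino shows up as /dev/ttyUSB0 and prints one reading per line at 9600 baud, so adjust the device name and baud rate to match your setup.

    // Minimal example: read the latest ASCII distance (in cm) that the
    // Arduino prints over USB serial. Device name and baud rate are
    // assumptions, not the actual Cog-Burn configuration.
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        termios tty{};
        tcgetattr(fd, &tty);
        cfsetispeed(&tty, B9600);        // match Serial.begin() in the Arduino sketch
        tty.c_cflag |= (CLOCAL | CREAD);
        tty.c_lflag |= ICANON;           // deliver one newline-terminated line per read
        tcsetattr(fd, TCSANOW, &tty);

        char line[64];
        while (true) {
            ssize_t n = read(fd, line, sizeof(line) - 1);
            if (n <= 0) continue;
            line[n] = '\0';
            int cm = std::atoi(line);    // latest distance in centimeters
            std::printf("distance: %d cm\n", cm);
        }
    }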
We did have a Hokuyo LIDAR available to our team for the DARPA DRC Finals, and we did use it at the DARPA DRC Trials. But we decided we could get away with just the LIDAR Lite, save some weight, and make the robot a bit more affordable.
Closed-Loop Control of the Head Using OpenCV
We made the decision to do less human control of the head during the Finals than we did at the Trials. We found that if you are only given a few seconds to make a decision and then command the robot, you don't want that time spent moving the head around.
We could have decided to scan the entire room and then simulate or emulate the robot, but that takes more processing, and then you have to try to compress that data and send it to the control station. Instead, as we did in the DARPA 2005 and 2007 challenges, we chose to focus on something and then make a decision. For instance, in Area A (a left turn across traffic) we moved the "head" sensor pack in the direction we needed to look.
Similarly, at the DARPA DRC Finals we used OpenCV to recognize the circle of the valve, or the rectangle with a circle in it and a line within that for the "door handle". We then sent messages to the HeadYaw and HeadPitch servos so that the LIDAR's origin sat at the center of the "valve" or "door handle". We added a process whose only purpose is to measure how far the LIDAR's origin is from the center of the "valve" and then move the HeadYaw and HeadPitch servos to bring it back to center.
Why no code?
We just have not gotten our hands on an example that does not get into the larger software system. Suffice it to say we built off the circle-detection examples in the OpenCV library, then simply emit the appropriate messages to change the head position.
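In the meantime, here is a rough standalone sketch of the idea, not the DRC code: detect the most prominent circle with OpenCV's HoughCircles, compare its center to the LIDAR origin, and turn the error into small HeadYaw/HeadPitch corrections. The origin pixel, the gain, the camera index, and the HoughCircles parameters are placeholder values.

    // Standalone sketch of the closed-loop head idea (not the DRC software).
    #include <opencv2/opencv.hpp>
    #include <vector>
    #include <cstdio>

    int main() {
        const cv::Point2f lidarOrigin(320.0f, 240.0f);  // measured as described in the next step
        const float gain = 0.05f;                        // degrees per pixel of error, tune to taste

        cv::VideoCapture cap(0);
        cv::Mat frame, gray;
        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::medianBlur(gray, gray, 5);

            std::vector<cv::Vec3f> circles;
            cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 1,
                             gray.rows / 8, 100, 30, 20, 200);
            if (circles.empty()) continue;

            cv::Point2f center(circles[0][0], circles[0][1]);
            float yawStep   = gain * (lidarOrigin.x - center.x);
            float pitchStep = gain * (lidarOrigin.y - center.y);

            // In the real system these would become messages to the
            // HeadYaw/HeadPitch servos; here we just print them.
            std::printf("HeadYaw += %.2f  HeadPitch += %.2f\n", yawStep, pitchStep);
        }
    }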
How to determine the LIDAR's origin?
First, we mounted the LIDAR close to the position of the camera itself. Then we turned off all the lights and put a white piece of paper in front of the robot. In a dark room, many cameras increase the video amplifier gain enough that infrared light will activate the CCD; this is the trick used in many DIY "night vision" projects. In this case we simply capture an image and record the X and Y of the spot the camera sees. That X and Y pair is the LIDAR's origin. The closed-loop control is attempting to do nothing more than keep the object of interest, i.e. the "valve" or the "door handle", centered at that origin, so the LIDAR reading on /dev/ttyUSB# is the distance to the object of interest.
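A minimal sketch of that calibration step might look like the following. It assumes the IR spot is the brightest thing in the darkened frame; the frame averaging is our own addition to steady the estimate.

    // Dark-room calibration sketch: with the lights off and the LIDAR
    // pointed at white paper, the IR spot is the brightest pixel, so
    // minMaxLoc gives its X,Y. Camera index 0 is an assumption.
    #include <opencv2/opencv.hpp>
    #include <cstdio>

    int main() {
        cv::VideoCapture cap(0);
        cv::Mat frame, gray;
        double sumX = 0, sumY = 0;
        const int frames = 30;

        for (int i = 0; i < frames; ++i) {
            if (!cap.read(frame)) return 1;
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2);  // smooth sensor noise

            double minVal, maxVal;
            cv::Point minLoc, maxLoc;
            cv::minMaxLoc(gray, &minVal, &maxVal, &minLoc, &maxLoc);
            sumX += maxLoc.x;
            sumY += maxLoc.y;
        }
        std::printf("LIDAR origin: (%.1f, %.1f)\n", sumX / frames, sumY / frames);
    }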
Why Two Cameras Then?
We have used disparity cameras in the past; in fact, we had two BumbleBees from Point Grey on our 2005 and 2007 vehicle, and we had a 3DV-E available to us for the DARPA DRC Finals. But, by similar logic, a disparity camera tends to give you point-cloud data that then needs heavy processing to remove noise before you can try to match surfaces of interest. We instead worked on an approach (it did not make it into the DARPA DRC software) that runs OpenCV on each camera and then does the math to calculate the distance to the object of interest using simply the distance between the two cameras and the disparity between the two detected centers of the object of interest. This has the benefit of calculating each center as an average over the pixels of the circle, so the distance the object is calculated to be is smoother than in a strict disparity image.
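To make the math concrete, a sketch of that calculation is below. The focal length (in pixels) and the camera baseline are placeholder numbers standing in for values you would get from camera calibration and from the printed mount.

    // Distance from the disparity between the object's center in the left
    // and right images: Z = f * B / (xLeft - xRight).
    #include <cstdio>

    double distanceFromCenters(double xLeft, double xRight,
                               double focalPx, double baselineM) {
        double disparity = xLeft - xRight;
        if (disparity <= 0.0) return -1.0;   // object at infinity or a bad match
        return focalPx * baselineM / disparity;
    }

    int main() {
        // Example centers as found by the per-camera OpenCV circle detection.
        double z = distanceFromCenters(352.4, 318.9, 700.0 /* px */, 0.06 /* m */);
        std::printf("distance to object of interest: %.2f m\n", z);
    }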
Mounting the LIDAR Lite and the Cameras
We created a custom mount that holds the two Logitech HD720P cameras rotated 90 degrees, and holds the LIDAR Lite as well.
This part was 3D printed and connected to the Dynamixel MX-28 servos used for HeadYaw and HeadPitch.
Conclusion
Object of interest first to reduce processing.
It is clear from our approach that we like to first command the robot to be "interested" in something, then use other devices or tools to get the distance to that thing. Having the closed-loop control of HeadYaw and HeadPitch based on the object of interest makes it much simpler for the operator to run the robot.