RGB-D SLAM With Kinect on Raspberry Pi 4 [Buster] ROS Melodic

by DmitryM8


Last year I wrote an article about building and installing ROS Melodic on the (then new) Raspberry Pi 4 with Debian Buster OS. The article received a lot of attention both here on Instructables and on other platforms, and I'm very glad I helped so many people successfully install ROS on the Raspberry Pi. In the accompanying video I also briefly demonstrated getting a depth image from the Kinect 360. Later, numerous people contacted me on LinkedIn and asked how I managed to use the Kinect with the Raspberry Pi. I was somewhat surprised at the question, since getting the Kinect ready at that time took me about 3-4 hours and didn't seem extremely complicated. I shared my .bash_history files with everyone who asked about the issue, and in April I finally found the time to write an article on how to install the Kinect drivers and perform RGB-D SLAM with RTAB-Map ROS. A week of sleepless nights after starting to write the article, I now understand why so many people asked me this question :)

I'll start with a brief explanation of which approaches worked and which didn't. Then I'll explain how to install the Kinect drivers for use with ROS Melodic, and finally how to set up your machine for RGB-D SLAM with RTAB-Map ROS.

What Worked and What Didn't

There are a few Kinect drivers available for the Raspberry Pi; two of them are supported by ROS:

OpenNI drivers - openni_camera package for ROS

libfreenect drivers - freenect_stack package for ROS

If you look at their respective GitHub repositories, you can find that the OpenNI driver was last updated years ago and has in practice been EOL for a long time. libfreenect, on the other hand, is updated in a timely manner. The same goes for their respective ROS packages: freenect_stack was released for ROS Melodic, while the latest distro openni_camera has listed support for is Fuerte...

It is possible to compile and install the OpenNI driver and the openni_camera package on the Raspberry Pi for ROS Melodic, although it didn't work for me. To do that, follow this guide, steps 1, 2 and 3; on steps 2 and 3, remove the "-mfloat-abi=softfp" flag from the Platform/Linux/Build/Common/Platform.ARM file (per the advice in this GitHub issue). Then clone the openni_camera package to your catkin workspace and compile it with catkin_make. It didn't work for me though; the error was "creating depth generator failed. Reason: USB interface is not supported!"
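If you want to try that route anyway, the flag removal can be done with a sed one-liner from the root of the OpenNI source tree; this is just a convenience sketch, assuming the flag appears verbatim in that file:

# Strip the unsupported softfp flag from the ARM platform config
# (run from the root of the OpenNI source tree)
sed -i 's/-mfloat-abi=softfp//g' Platform/Linux/Build/Common/Platform.ARM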

Using libfreenect and freenect_stack yielded success in the end, but there were quite a few problems to solve, and while the solution is a bit hacky, it works very stably (over an hour of continuous operation).

Installing Freenect Drivers and Freenect_stack

[Screenshot 1: freenect.launch terminal output. Screenshot 2: RGB and depth image streams in RVIZ.]

I'll assume that you're using my ROS Melodic Desktop image from this article. If you want to do the installation in a different environment, for example the ros_comm image or Ubuntu for Raspberry Pi, make sure you have enough knowledge about ROS to solve the problems that might arise from that difference.

Let's start by building the libfreenect drivers from source, since the pre-built version in the apt repository is too outdated.

sudo apt-get update
sudo apt-get install libusb-1.0-0-dev
git clone https://github.com/OpenKinect/libfreenect.git
cd libfreenect
mkdir build && cd build
cmake -L ..
make
sudo make install
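Before moving on, it's worth sanity-checking the driver. lsusb should show the Kinect's motor, audio and camera devices, and if your Pi has a display attached, the freenect-glview example built above (examples are built by default) should show a live depth image. Both checks are optional:

lsusb | grep -i xbox    # the Kinect shows up as Xbox NUI Motor/Audio/Camera
freenect-glview         # live depth/RGB view; needs a display attached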

Hopefully the build process will be uneventful and full of friendly green messages. After you've installed the libfreenect driver, the next thing to do is to install the freenect_stack package for ROS. It depends on quite a few other packages; we'll have to clone them and build them all together with catkin_make. Before you start, make sure your catkin workspace is properly set up and sourced!
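If you haven't created a workspace yet, a minimal setup looks like this (assuming ~/catkin_ws as the workspace path):

mkdir -p ~/catkin_ws/src    # create the workspace and its src folder
cd ~/catkin_ws
catkin_make                 # initialize the workspace
source devel/setup.bash     # source it so ROS can find your packages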

From your catkin workspace src folder:

git clone https://github.com/ros-drivers/freenect_stack.git
git clone https://github.com/ros-perception/image_common.git
git clone https://github.com/ros-drivers/rgbd_launch.git
git clone https://github.com/ros-perception/vision_opencv.git
git clone https://github.com/ros-perception/image_pipeline.git
git clone https://github.com/ros/geometry2.git

Whooh, that was a lot of cloning.

LATER EDIT: As one of my readers pointed out, the vision_opencv repository needs to be set to the melodic branch. For that, cd to src/vision_opencv and execute:

git checkout melodic

Then go back to your catkin workspace folder. To check that the dependencies for all packages are in place, execute this command:

rosdep install --from-paths src --ignore-src

If you successfully cloned all the necessary packages, rosdep will ask to install libfreenect with apt-get. Answer no, since we already installed it from source. Then install a few more dependencies and start the build:

sudo apt-get install libbullet-dev libharfbuzz-dev libgtk2.0-dev libgtk-3-dev
catkin_make -j2

Tea time ;) or whatever your favorite drink is.

After the compilation process has finished, you can try launching the Kinect stack and checking whether it outputs the depth and color images properly. I use my Raspberry Pi headless, so I need to run RVIZ on my desktop computer.

On the Raspberry Pi, do the following (change the IP address to your Raspberry Pi's IP address!):

export ROS_MASTER_URI=http://192.168.0.108:11311     
export ROS_IP=192.168.0.108
roslaunch freenect_launch freenect.launch depth_registration:=true

You will see output as in Screenshot 1. "Stopping device RGB and Depth stream flush." indicates that the Kinect is ready, but nothing is subscribed to its topics yet.
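If you want to confirm that the streams are actually being published before starting RVIZ, you can check from a second terminal on the Raspberry Pi. The topic names below are the usual freenect_launch defaults; yours may differ:

rostopic list | grep camera                       # list the Kinect topics
rostopic hz /camera/rgb/image_color               # measure the RGB frame rate
rostopic hz /camera/depth_registered/image_raw    # measure the registered depth frame rate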

On your desktop computer with ROS Melodic installed, do:

export ROS_MASTER_URI=http://192.168.0.108:11311    
export ROS_IP=[your-desktop-computer-ip]
rviz

Now you should be able to see RGB and Depth image streams in RVIZ as in Screenshot 2 above... but not at the same time.

Okay, here is where the hacky stuff starts. I spent 3 days trying different drivers and approaches, and nothing worked - as soon as I tried accessing the two streams simultaneously, the Kinect would start timing out, as you can see in Screenshot 3. I tried everything: a better power supply, older commits of libfreenect and freenect_stack, stopping usb_autosuspend, injecting bleach into the USB ports (okay, not the last one! Don't do it, it's a joke and should not constitute technical advice :) ). Then in one of the GitHub issues I saw an account of a person who said their Kinect was unstable until they "loaded the USB bus" by connecting a WiFi dongle. I tried that and it worked. On the one hand, I'm glad that it worked. On the other hand, somebody really ought to fix that. Well, meanwhile, having (sort of) fixed it, let's move on to the next step.
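If you replicate the dongle workaround, you can check how the Kinect and the dongle ended up distributed across the USB buses:

lsusb -t    # prints the USB device tree with bus, port and speed for each device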

Installing Standalone RTAB-Map


First, we have a bunch of dependencies to install.

Although there is a prebuilt armhf package available for PCL, we'll need to compile it from source because of this issue. Consult the PCL GitHub repository to see how to compile it from source (a sketch follows after the dependency installs below).

sudo apt-get install libvtk6-dev libvtk6-qt-dev libvtk6-java libvtk6-jni
sudo apt-get install libopencv-dev cmake libopenni2-dev libsqlite3-dev 
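For reference, a PCL source build usually follows the standard CMake flow sketched below; check the PCL repository for the recommended release tag and CMake flags before running it:

cd ~
git clone https://github.com/PointCloudLibrary/pcl.git
cd pcl
# git checkout <release-tag>    # optionally pin a specific release
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j2                        # -j2 keeps memory usage manageable on the Pi
sudo make install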

Now let's clone the standalone RTAB-Map repository to our home folder and build it. I used the latest release (0.18.0).

git clone https://github.com/introlab/rtabmap.git
cd rtabmap/build
cmake ..
make -j2
sudo make install
sudo ldconfig
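You can quickly confirm that the RTAB-Map libraries landed in the linker cache:

ldconfig -p | grep rtabmap    # should list the installed librtabmap libraries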

Now that we have compiled standalone RTAB-Map, we can move to the last step: compiling and installing the ROS wrapper for RTAB-Map, rtabmap_ros.

Installing rtabmap_ros


If you got this far, you probably know the drill by now :) Clone the rtabmap_ros repository to your catkin workspace src folder. (Execute the next commands from your catkin workspace src folder!)

git clone https://github.com/introlab/rtabmap_ros.git

We'll also need these ROS packages that rtabmap_ros depends on:

git clone https://github.com/ros-perception/perception_pcl.git
git clone https://github.com/ros-perception/pcl_msgs.git
git clone https://github.com/ros-planning/navigation.git
git clone https://github.com/OctoMap/octomap_msgs.git
git clone https://github.com/introlab/find-object.git

Before you start the compilation, you can make sure you are not missing any dependencies with the following command:

rosdep install --from-paths src --ignore-src

Install one more dependency from apt-get (a missing one won't interrupt the linking, but will throw an error during compilation):

sudo apt-get install libsdl-image1.2-dev

Then move to your catkin workspace folder and start compiling:

cd ..
catkin_make -j2

I hope you didn't put your favorite compilation drink too far away. After the compilation is done, we're ready to do the mapping!

Show Time

[Screenshot 1: freenect.launch terminal output. Screenshot 2: mapping results in RVIZ.]

Do the hacky trick of adding something like a WiFi or Bluetooth dongle to a USB port - I was using two USB 2.0 ports, one for the Kinect and the other for the WiFi dongle.

On the Raspberry Pi, do the following (change the IP address to your Raspberry Pi's IP address!):
1st terminal:

export ROS_MASTER_URI=http://192.168.0.108:11311
export ROS_IP=192.168.0.108 
roslaunch freenect_launch freenect.launch depth_registration:=true data_skip:=2

2nd terminal:

roslaunch rtabmap_ros rgbd_mapping.launch rtabmap_args:="--delete_db_on_start --Vis/MaxFeatures 500 --Mem/ImagePreDecimation 2 --Mem/ImagePostDecimation 2 --Kp/DetectorStrategy 6 --OdomF2M/MaxSize 1000 --Odom/ImageDecimation 2" rtabmapviz:=false

You will see output as in Screenshot 1. "Stopping device RGB and Depth stream flush." indicates that the Kinect is ready, but nothing is subscribed to its topics yet. In the second terminal you should see messages about odometry quality. If you move the Kinect too fast, the odometry quality will drop to 0 and you'll need to move back to a previously mapped location or start from a clean database.
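If you prefer watching the odometry numerically rather than through log messages, you can inspect the odometry topic from another terminal (in rgbd_mapping.launch it is typically published as /rtabmap/odom):

rostopic hz /rtabmap/odom      # update rate of the visual odometry
rostopic echo /rtabmap/odom    # the actual pose estimates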

On your desktop computer with ROS Melodic and the rtabmap_ros package installed (I recommend you use an Ubuntu computer for that, since pre-built packages are available for the amd64 architecture) do:

export ROS_MASTER_URI=http://192.168.0.108:11311 
export ROS_IP=[your-desktop-computer-ip] 
rviz 

Add MapGraph and MapCloud displays to RVIZ and choose the corresponding topics coming from rtabmap. Well, this is it, the sweet taste of victory! Go ahead and do some mapping :)
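As a bonus, the map is saved to a database (~/.ros/rtabmap.db by default), which you can inspect offline with the database viewer installed together with the standalone RTAB-Map:

rtabmap-databaseViewer ~/.ros/rtabmap.db    # browse the recorded map database offline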

References

While writing this article I consulted a number of resources, mostly forums and GitHub issues. I'll leave them here.

https://github.com/OpenKinect/libfreenect/issues/338

https://www.reddit.com/r/robotics/comments/8d37gy/ros_with_raspberry_pi_and_xbox_360_kinect_question/

https://github.com/ros-drivers/freenect_stack/issues/48

http://official-rtab-map-forum.67519.x6.nabble.com/RGB-D-SLAM-example-on-ROS-and-Raspberry-Pi-3-td1250.html

https://github.com/OpenKinect/libfreenect/issues/524


Add me on LinkedIn if you have any questions and subscribe to my YouTube channel to get notified about more interesting projects involving machine learning and robotics.