In this post, we will discuss how to publish a video stream to ROS. This post is a continuation of the post Set up Camera.
System Configuration
- OS: Ubuntu 20.04
- ROS: Noetic
We will have two hosts, one for the robot and the other for visualization (e.g. rviz). roscore will be running on the robot host and rviz will be running on the visualization host. In my case, the robot host is a Raspberry Pi 3B+ and the visualization host is a laptop.
Task
- Create a ROS package.
- Use a camera to capture images.
- Create a ROS node that publishes a stream of images to ROS.
- Compile the ROS workspace.
- Launch roscore on the robot host.
- Launch the node that publishes the stream of images.
- Launch rviz on the laptop and view the video.
Create a ROS Package
This is straightforward:
# on robot host
cd ~/workspace/ros_workspace/src
catkin_create_pkg my_camera
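As a side note, catkin_create_pkg can also record the package dependencies in package.xml up front. A variant of the command above, assuming the node will use rospy, sensor_msgs, and cv_bridge as in the script later in this post:
# on robot host, inside ~/workspace/ros_workspace/src
catkin_create_pkg my_camera rospy sensor_msgs cv_bridge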
Use Camera to Capture Images
In the post Set up Camera, we've already seen how to use OpenCV to control the camera connected to the Raspberry Pi. Here we just present a Camera class:
class Camera:
    def __init__(self, fps=20):
        self.camera = cv2.VideoCapture('/dev/video0', cv2.CAP_V4L)
        self.camera.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
        self.camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
        self.camera.set(cv2.CAP_PROP_FPS, fps)

    def captureFrame(self):
        return self.camera.read()

    def release(self):
        self.camera.release()

    def isAvailable(self):
        return self.camera is not None and self.camera.isOpened()
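Before wiring this into ROS, the class can be sanity-checked on its own. A minimal sketch (the names here are only for illustration):
camera = Camera(fps=20)
if camera.isAvailable():
    ret, frame = camera.captureFrame()  # ret is True when a frame was grabbed
    print("got frame:", ret, frame.shape if ret else None)
camera.release()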
It's good practice to separate different I/O tasks and execute them in different threads so that operations do not block each other. Therefore, we will have a separate thread for the camera to take photos and a queue to share the data between threads. A simple implementation is given below:
class CameraCaptureThread(Thread):
    def __init__(self, camera=None, workQueue=None):
        Thread.__init__(self)
        self.camera = camera
        self.workQueue = workQueue

    def run(self):
        while self.camera.isAvailable():
            # rospy.loginfo("Capturing a frame.")
            ret, frame = self.camera.captureFrame()
            if ret:  # read() returns (success, frame); ret is a boolean
                self.workQueue.put(frame)
        self.camera.release()
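Wiring the camera, the queue, and the capture thread together then looks like this (it mirrors the main section of the full script in the appendix; FPS is the configured frame rate):
workQueue = queue.Queue()
camera = Camera(fps=FPS)
# Start a separate thread to capture frames and push them into the queue.
cameraCaptureThread = CameraCaptureThread(camera=camera, workQueue=workQueue)
cameraCaptureThread.start()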
Create a ROS Publisher Node
ROS already provides a sensor_msgs/Image message type. We will just publish this message to a topic, let's say /camera_image.
We can manually create the Image object, but ROS already provides a utility function (cv_bridge) that converts an OpenCV image to the corresponding Image object. We don't even need to specify the encoding. Here is the code:
def createImageObject(frame):
    # Manual construction of the Image message, kept here for reference:
    # img = Image()
    # img.header.stamp = rospy.get_rostime()
    # img.height = frame.shape[0]
    # img.width = frame.shape[1]
    # nChannel = frame.shape[2]
    # img.step = nChannel * img.width
    # img.encoding = 'rgb8'
    # img.data = frame.flatten().tolist()
    # return img
    bridge = CvBridge()
    return bridge.cv2_to_imgmsg(cv2.flip(frame, 0), encoding="passthrough")
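One caveat: OpenCV captures frames in BGR channel order, and with encoding="passthrough" the message's encoding field is derived from the array type rather than set to a color format. If the colors look swapped in rviz, a variant that labels the encoding explicitly should help (a sketch only; the hypothetical createImageObjectBgr is not used in the script below):
def createImageObjectBgr(frame):
    # Label the encoding so subscribers know the channel order is BGR.
    bridge = CvBridge()
    return bridge.cv2_to_imgmsg(cv2.flip(frame, 0), encoding="bgr8")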
Initializing a node and publishing a message is straightforward once we have all the ingredients ready:
# ROS setting
rospy.loginfo("Initiate the camera_node.")
rospy.init_node('camera_node')
rate = rospy.Rate(FPS * 2)

rospy.loginfo("Create the camera_image publisher.")
publisher = rospy.Publisher('camera_image', Image, queue_size=10)

while not rospy.is_shutdown():
    while not workQueue.empty():
        frame = workQueue.get()
        imageObject = createImageObject(frame)
        # rospy.loginfo("Publish the captured frame.")
        publisher.publish(imageObject)
    rate.sleep()

cameraCaptureThread.join()
The complete script is attached in the appendix at the end of this post. Note that the #!/usr/bin/env python3 shebang in the first line is important; without it, rosrun will fail to run the script or complain that it cannot find the executable. Remember to make the Python script executable with chmod +x as well.
Suppose we have this script saved as ~/workspace/ros_workspace/src/my_camera/camera_node.py.
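For example:
# on robot host
chmod +x ~/workspace/ros_workspace/src/my_camera/camera_node.py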
Compile ROS Code
This step is also straightforward:
# on robot host
cd ~/workspace/ros_workspace
catkin_make
source devel/setup.bash
Launch ROS on Robot Host
We need to launch roscore and our camera node. We will use two terminals, and we need to execute source ~/workspace/ros_workspace/devel/setup.bash in both of them.
Because we will need a remote connection, we also need to set the ROS_IP environment variable. For convenience, I just add the following two lines to the ~/.bashrc file.
source ~/workspace/ros_workspace/devel/setup.bash
export ROS_IP=my-raspberry-pi-ip
Now we can launch ROS:
# terminal 1 of the robot host
roscore

# terminal 2 of the robot host
rosrun my_camera camera_node.py
We should see the topic and the published messages using the following commands:
# on robot host
rostopic list  # this gives the list of topics
rostopic echo /camera_image  # this prints the published messages to the screen
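Optionally, rostopic hz gives a quick check that frames are actually arriving at roughly the expected rate:
# on robot host
rostopic hz /camera_image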
Launch rviz on Laptop
First, we need to configure the environment variables ROS_MASTER_URI and ROS_IP. ROS_MASTER_URI tells ROS on the laptop where to find the master node; the default port is 11311. For more information about ROS environment variables, you may check the official documentation.
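For example, the laptop's ~/.bashrc (or the current shell) could contain something like the following, where my-raspberry-pi-ip is the same placeholder used above and my-laptop-ip stands for the laptop's own IP address:
# on laptop
export ROS_MASTER_URI=http://my-raspberry-pi-ip:11311
export ROS_IP=my-laptop-ip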
We can verify that the setup is working properly by checking the topic and the messages; we should be able to receive the published messages on the laptop.
# Verify the laptop is set up correctly
# on laptop
rostopic list  # this gives the list of topics
rostopic echo /camera_image  # this prints the published messages to the screen
Once the environment variables are configured and the simple sanity check passes, we can launch the rviz program using the command
# on laptop
rviz
We will not cover how to use rviz in this post. There are many tutorials online, and it's quite easy to add an Image display in the rviz GUI and point it at the /camera_image topic.
Appendix
Here is the complete script of camera_node.py:
#!/usr/bin/env python3
import cv2
import time
import queue
from threading import Thread

import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class CameraCaptureThread(Thread):
    def __init__(self, camera=None, workQueue=None):
        Thread.__init__(self)
        self.camera = camera
        self.workQueue = workQueue

    def run(self):
        while self.camera.isAvailable():
            # rospy.loginfo("Capturing a frame.")
            ret, frame = self.camera.captureFrame()
            if ret:  # read() returns (success, frame); ret is a boolean
                self.workQueue.put(frame)
        self.camera.release()


class Camera:
    def __init__(self, fps=20):
        self.camera = cv2.VideoCapture('/dev/video0', cv2.CAP_V4L)
        self.camera.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
        self.camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
        self.camera.set(cv2.CAP_PROP_FPS, fps)

    def captureFrame(self):
        return self.camera.read()

    def release(self):
        self.camera.release()

    def isAvailable(self):
        return self.camera is not None and self.camera.isOpened()


def createImageObject(frame):
    # Manual construction of the Image message, kept here for reference:
    # img = Image()
    # img.header.stamp = rospy.get_rostime()
    # img.height = frame.shape[0]
    # img.width = frame.shape[1]
    # nChannel = frame.shape[2]
    # img.step = nChannel * img.width
    # img.encoding = 'rgb8'
    # img.data = frame.flatten().tolist()
    # return img
    bridge = CvBridge()
    return bridge.cv2_to_imgmsg(cv2.flip(frame, 0), encoding="passthrough")


if __name__ == '__main__':
    # Configuration
    FPS = 10

    rospy.loginfo("Initiate the work queue and the camera.")
    workQueue = queue.Queue()
    camera = Camera(fps=FPS)

    # Start a separate thread to capture frames
    cameraCaptureThread = CameraCaptureThread(camera=camera, workQueue=workQueue)
    rospy.loginfo("Start the thread to capture image.")
    cameraCaptureThread.start()

    # ROS setting
    rospy.loginfo("Initiate the camera_node.")
    rospy.init_node('camera_node')
    rate = rospy.Rate(FPS * 2)

    rospy.loginfo("Create the camera_image publisher.")
    publisher = rospy.Publisher('camera_image', Image, queue_size=10)

    while not rospy.is_shutdown():
        while not workQueue.empty():
            frame = workQueue.get()
            imageObject = createImageObject(frame)
            # rospy.loginfo("Publish the captured frame.")
            publisher.publish(imageObject)
        rate.sleep()

    cameraCaptureThread.join()