Implementing a YoloV5 ROS2 Wrapper


Brandon Fan

Computer vision is an important part of robotics. It allows the robot to move in space, identify objects as it moves, and ultimately pass this information off to other algorithms such as motion and grasp planning to interact with the object and put it into its proper place. Today, we’re going to investigate the Robot Operating System ecosystem (specifically ROS2) along with ROS wrappers (the ability to integrate data into ROS2’s messaging system) and implement our own wrapper for an object recognition algorithm: YOLOv5.

The TL;DR: If you are interested in simply utilizing the YOLOv5 wrapper or viewing the entire source code, feel free to just visit here, where we provide a full Dockerfile for you to run directly! You can also pull the prebuilt Docker image from our Docker Hub.

docker pull shaderobotics/yolov5:latest

So What is ROS?

The Quick Summary

At its core, the Robot Operating System is a standard set of communication frameworks that enables roboticists to easily connect and integrate a full system of different drivers and components that would often take months, if not years, to build out on their own. First created in 2007 by Willow Garage, a research center with a focus on robotics products, ROS functions as middleware responsible for handling the communication between programs in a distributed system. It does this by grabbing data from the underlying drivers and devices themselves (such as the position of a robotic arm, the footage from a camera, etc.) and converting it into a generalized message format that any other component (the proper term is node) can read. ROS has cultivated a gigantic ecosystem across industry and academia that has produced quite a variety of ROS wrappers encapsulating various parts of the robotics lifecycle, including motion planning (take a look at MoveIt), navigation (Nav2), and more. With the goal of providing a standard for robotics software development regardless of hardware, ROS allows developers to focus on the key differentiating features of an application rather than re-building a foundation.

Development of ROS2

ROS2 extends the vision of the original ROS while attempting to solve some of its shortcomings: namely, that it was not a hard real-time system and offered no guarantees that specific tasks would be executed within a specific amount of time. The first ROS2 distribution was released in late 2017, but the ecosystem of wrappers is still relatively small.

For more information on the differences between ROS1 and ROS2, check out this article by Colin, which goes more in depth on ROS and its future.

What’s the point of all of it?

Moral of the story: ROS helps you build robots much faster. Whether you’re bodging (i.e. just trying to put a lot of pieces together for a theoretical prototype) or going into a production system, ROS has a variety of use cases and solutions that make it a go-to messaging system for your robot.

Creating a ROS2 Wrapper

Now if you’re still reading this, you’re probably asking yourself the same questions that I did, all those years ago, when I first started learning ROS. Questions like “Why is it so difficult to get ROS set up?”, “How in the world do you integrate this third-party package?”, or “Why does everything keep crashing when I try?”. And don’t worry, I get it! I’ve been there, my co-workers have been there, and that experience doesn’t seem to go away, even as time goes on. But whether you’re just getting started with ROS or you’re a ROS veteran, I hope that this article can help you along your way of integrating your next component.

Before we begin, I do need to specify that this tutorial is geared more toward those aiming to create ROS2 integrations. However, ROS1 integrations should be fairly straightforward as well, with the major difference being that the build relies on tools like rosbuild or catkin instead of colcon. Additionally, for the sake of simplicity, we will be focusing on creating a Python wrapper today, specifically a YOLOv5 wrapper in the second section below.

Creating A General Wrapper

1. Setting up the Environment

To start off, we will install all the packages required for your wrapper node. Ideally, use something that containerizes your target node (e.g. Docker), as it prevents dependency conflicts between your wrapper and other parts of your robot. However, installing the dependencies locally works as well.

Keep in mind, this is by far the hardest and most time-consuming step of the process.

To help you get started, feel free to use any of the base images provided by ROS themselves or, if you are hoping to utilize CUDA cores to accelerate your program, you may also find one of Shade’s base images useful. Here is an example of an extremely basic Dockerfile that could be used to create any Python ROS2 wrapper:


FROM ros:foxy
ENV ROS_DISTRO=foxy

# set up the workspace
RUN ["/bin/bash", "-c", "mkdir -p /app/shade_ws/src"]
WORKDIR /app/shade_ws/src

# inject your wrapper either locally or through git
COPY . /app/shade_ws/src
RUN git clone [gitURL]

# install dependencies
# repeat for every package you have
RUN cd [package] && \
    python3 -m pip install -r requirements.txt

# build your wrapper
WORKDIR /app/shade_ws
RUN colcon build

# run your wrapper. modify the third argument to your wrapper's needs
ENTRYPOINT ["/bin/bash", "-c", "source /opt/ros/${ROS_DISTRO}/setup.bash && source ./install/setup.bash && ros2 run [package_name] [entrypoint]"]

2. Creating your wrapper

To create your wrapper, use ROS2’s built-in command line tool ros2 pkg create to generate a skeleton template for your new wrapper. Open the generated package.xml and setup.py files, modify any fields as necessary, and begin writing your wrapper. Assuming you have followed step 1 correctly, you should be able to import any dependency without running into issues. Please refer to the official ROS2 docs for a greater understanding of basic pub/sub topics.
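As a point of reference, here is a minimal sketch of what such a node file might look like. The package name my_package, node name my_node, and topic name are hypothetical placeholders, not part of any specific wrapper:

import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class MyNode(Node):
    def __init__(self):
        super().__init__('my_node')
        # publish a simple string message once per second
        self.publisher = self.create_publisher(String, 'my_package/status', 10)
        self.timer = self.create_timer(1.0, self.timer_callback)

    def timer_callback(self):
        msg = String()
        msg.data = 'hello from my_node'
        self.publisher.publish(msg)

def main(args=None):
    rclpy.init(args=args)
    node = MyNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()

The main() function here is what your setup.py entry point will reference so that ros2 run can find and launch the node.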

3. Running your wrapper

Then, when your wrapper is finally done (or you just want to test it), simply build your container by running docker build . -t [name] in your repository, then run docker run -t [name] and voila, your new ROS2 wrapper node should be running! Even from outside your Docker container, you can still see all the existing topics with ros2 topic list and publish or subscribe to them as well (assuming the container shares the host’s network, e.g. docker run --net=host).

Things to watch out for when implementing a ROS2 Wrapper

Dependencies, Dependencies, and more Dependencies!

When installing different packages, be very careful about the Python versions required by each package. For example, ROS2 Eloquent natively supports only Python 3.6.9, so newer releases of packages like NumPy may not be compatible. Possible solutions are forcing dependency versions with python3 -m pip install package==version or trying a different ROS2 distribution.

Testing

If you have difficulties testing your wrapper, there are many sensor emulators available online. Simply search for your desired sensor streamer and you should find a couple of GitHub repos you can git clone into your Docker container. You can also pass a ROS argument with --ros-args -r /code_topic:=/renamed_topic to remap any pre-existing topic names to match your code.

Creating a ROS2 Wrapper for YoloV5

Now that you understand the general steps for creating a basic Python ROS2 wrapper, let’s walk through how to create a basic YOLOv5 wrapper. If you are interested in simply utilizing the wrapper or viewing the entire source code, feel free to just visit here.

1. Setting Up Environment

Similar to the steps above, we begin by creating a basic Dockerfile to serve as the basis for our image. Because our wrapper is based on AI and computer vision, having CUDA enabled will greatly accelerate the performance of the node we wish to create. Because of this, we’ll be using one of shaderobotics’ base images.

FROM shaderobotics/cuda:foxy11.7.0

# create a basic workspace
RUN ["/bin/bash", "-c", "mkdir -p /app/shade_ws/src"]
WORKDIR /app/shade_ws/src

# install some basic tools
RUN apt update && \
    apt install -y --no-install-recommends \
      git \
      curl && \
    rm -rf /var/lib/apt/lists/*

Reviewing the code above, we can see that we begin our container by utilizing a precompiled image with all the necessary ROS2 and NVIDIA tools preinstalled. The command FROM shaderobotics/cuda:foxy11.7.0 visits the Shade registry and pulls a prebuilt image, saving you the difficulty of installing these packages yourself. Next, we create a workspace within the new container where all your packages will go. Then, we simply install a couple of tools that will come in handy in a bit.

2. Installing Dependencies

Next, we install any packages that we wish to use within our wrapper. Don’t worry about trying to install them all at once; you can come back to this step while you are writing your wrapper. Dependency errors are usually pretty straightforward to spot, with the typical error looking something like ModuleNotFoundError: No module named 'numpy'.

# Install basic messaging and image processing tools
RUN apt update && \
    apt install -y --no-install-recommends \
      ffmpeg \
      libsm6 \
      libxext6 \
      ros-foxy-cv-bridge \
      ros-foxy-vision-msgs \
      python3-natsort \
      ros-foxy-vision-opencv && \
    rm -rf /var/lib/apt/lists/*

# Utilize pip to install any required Python dependencies
RUN python3 -m pip install --upgrade pip && \
    python3 -m pip install -qr https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt

You can also decide to use rosdep instead to tackle many dependencies, though you will run into instances where packages are not supported by rosdep.

3. Creating your wrapper

To create a boilerplate for your wrapper, ROS2’s built-in CLI tools can save you a lot of time.

ros2 pkg create --build-type ament_python --node-name my_node my_package

Then, open my_package/package.xml and my_package/setup.py and modify basic information like the name, version, description, etc. Afterward, also modify the install_requires line in setup.py to support PyTorch as well.

install_requires=['setuptools', 'torch']
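Note that step 5 later runs the node with ros2 run yolov5_ros2 [node], which requires a matching console_scripts entry point in setup.py. As a rough sketch (assuming the package is named yolov5_ros2 and its main() function lives in a hypothetical yolov5_ros2/interface.py module, matching the interface executable used in the testing step), the entry point might look like:

entry_points={
    'console_scripts': [
        # "interface" becomes the executable name used by `ros2 run yolov5_ros2 interface`
        'interface = yolov5_ros2.interface:main',
    ],
},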

4. Creating the wrapper.

Following the framework showcased in the simple pubsub example, you can initialize a basic node like this.

import torch
import rclpy
from rclpy.node import Node
from cv_bridge import CvBridge
from sensor_msgs.msg import Image


class ImageSubscriber(Node):
    def __init__(self):
        super().__init__('yolov5_node')

        # declare pub/sub routes
        self.subscription = self.create_subscription(
            Image,
            'yolov5/image_raw',
            self.listener_callback,
            1
        )
        self.image_publisher = self.create_publisher(Image, 'yolov5/image', 10)

        # declare tools
        self.br = CvBridge()
        self.model = torch.hub.load('ultralytics/yolov5', 'yolov5m')
        self.get_logger().info("Node Initialized")

    def listener_callback(self, data):
        self.get_logger().info("Got Image")

        # process the image: convert to OpenCV, run YOLOv5, and draw the detections
        current_frame = self.br.imgmsg_to_cv2(data)
        results = self.model(current_frame)
        annotated_frame = results.render()[0]  # render() returns the frames with boxes drawn
        result = self.br.cv2_to_imgmsg(annotated_frame)

        # publish it back
        self.image_publisher.publish(result)


def main(args=None):
    rclpy.init(args=args)
    image_subscriber = ImageSubscriber()
    rclpy.spin(image_subscriber)

    image_subscriber.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()

The code above can mainly be broken down into three parts: initialization, callback, and starting the node.

Initialization

In the initialization step, we declare a node called ‘yolov5_node’. Then we create a basic subscriber and publisher that both utilize the sensor_msgs.msg.Image type, with the topic names yolov5/image_raw and yolov5/image respectively. Next, we create a basic CvBridge tool to convert the Image type to an OpenCV-compatible type, and initialize the YOLOv5 model using the pretrained yolov5m weights.

Callback

Within the callback listener_callback(), the image is converted to an OpenCV type using CvBridge, run through the declared YOLOv5 model, and the annotated result is then reformatted back into an Image message and published out to the yolov5/image topic.
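If you also want programmatic access to the raw detections rather than just the annotated frame, the torch.hub results object can be converted to a pandas DataFrame. Here is a small, hypothetical helper (log_detections is not part of the wrapper above, just an illustration) showing how those detections could be logged:

def log_detections(node, results):
    # results.pandas().xyxy[0] is a DataFrame with columns:
    # xmin, ymin, xmax, ymax, confidence, class, name
    detections = results.pandas().xyxy[0]
    for _, det in detections.iterrows():
        node.get_logger().info(
            f"{det['name']}: {det['confidence']:.2f} at "
            f"({det['xmin']:.0f}, {det['ymin']:.0f}, {det['xmax']:.0f}, {det['ymax']:.0f})"
        )

You could call this from listener_callback() right after running the model, passing in the model’s output.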

Starting the node

After handling the creation and processing of the node, all that is left is to spin up the node when it is called. This code is identical to the pub/sub example within the ROS2 docs.

5. Running the wrapper code within your container

# copy in the wrapper and build it from the workspace root
COPY ./yolov5_ros2 /app/shade_ws/src/yolov5_ros2
WORKDIR /app/shade_ws
RUN colcon build

# begin the node
ENTRYPOINT ["/bin/bash", "-c", "source /opt/ros/${ROS_DISTRO}/setup.bash && source ./install/setup.bash && ros2 run yolov5_ros2 [node]"]

Once you have completed your wrapper, you want to be able to run it within your Docker container. The COPY command copies your wrapper code from your local machine into the Docker image. Then, colcon builds your wrapper before the ENTRYPOINT spins up the node with your desired arguments!

You should now be done at this point; all that’s left is to run a couple of commands to build and test!

$ docker build . -t [name]
$ docker run -t [name]

6. Testing

If you are unsure whether your node is working properly (assuming your Dockerfile builds completely), you can also run a sensor emulator / streamer within the container to test it. For this example, we will be using klintan’s video_streamer. Add it to your Docker container before step 5, which copies the package into your workspace along with a stock video.

RUN git clone https://github.com/klintan/ros2_video_streamer && \
    curl --output ./video.mp4 https://www.sample-videos.com/video123/mp4/360/big_buck_bunny_360p_10mb.mp4

Then, remove the ENTRYPOINT command and replace it with

RUN echo '#!/bin/bash' >> run.sh && \
    echo 'source /opt/ros/${ROS_DISTRO}/setup.bash' >> run.sh && \
    echo 'source ./install/setup.bash' >> run.sh && \
    echo 'ros2 run camera_simulator camera_simulator --type video --path /app/shade_ws/src/video.mp4 --loop &' >> run.sh && \
    echo 'ros2 run yolov5_ros2 interface --ros-args -r /yolov5/image_raw:=/image/image_raw' >> run.sh && \
    chmod +x run.sh

CMD ["./run.sh"]

This complicated command simply spins up two nodes simultaneously while remapping the /yolov5/image_raw topic to the topic output by the camera_simulator we just installed.

Then, simply perform the run commands found in step 5 and you have successfully tested your wrapper!
