Testing of the Installation

Introduction

This chapter provides step-by-step testing instructions for the TX2, the drone, ROS, and the DNN models. Please go through each step carefully to ensure proper function.

Ensure that the drone, the TX2 and all sensors are set up as described in the introduction. Then validate that the drone flies properly in STABILIZE, LOITER and other flight modes. Verify that the flow sensor and the lidar sensor work correctly in flight, as described in the respective Arducopter wiki pages.

Testing of the Drone - TX2 Setup

Testing of the ZED Camera Setup

Connect the ZED to the USB3 port. Run the tests as described in the Stereolabs installation guide. Once the examples run fine, proceed with testing the ZED ROS wrapper as described here. The ZED ROS wrapper should already be installed if you followed the Redtail setup script as described on the first page of this wiki.

Verify apsync Installation

To verify the correct apsync installation and operation, please follow the test script here. Then proceed to the next step.

ROS Node Testing

The following image visualizes the different ROS nodes, topics and data flows in rqt_graph.

rqt_graph

ZED ROS Image Node

Open a terminal window on the TX2 and enter:

export ROS_IP=10.0.1.128
roslaunch zed_wrapper zed.launch

Then open a terminal window on your host PC as well as one on the TX2, and run rviz in each window in order to verify the communication between the two computers and the correct visualization of the different ZED image topics. Make sure that you have set

$ export ROS_MASTER_URI=http://10.0.1.128:11311 #the TX2 IP address should be 10.0.1.128 if you are using apsync
$ export ROS_IP=10.0.1.xxx # enter the IP of your host PC here; use ifconfig to find it

on the host PC, as also described in the installation instructions. Select the different ZED image topics. Also, activate the zed_node/pose topic and verify that the orientation of the ZED camera is displayed correctly in the rviz main window.
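
If you prefer a scriptable check in addition to rviz, a minimal rospy sketch along the following lines can print the camera pose on the console. This is only a sketch; it assumes the default zed-ros-wrapper topic name /zed/zed_node/pose, and the node name is arbitrary.

# Minimal pose check; assumes the default zed-ros-wrapper topic /zed/zed_node/pose
import rospy
from geometry_msgs.msg import PoseStamped

def on_pose(msg):
    p = msg.pose.position
    q = msg.pose.orientation
    rospy.loginfo("position: %.2f %.2f %.2f  orientation (quaternion): %.2f %.2f %.2f %.2f",
                  p.x, p.y, p.z, q.x, q.y, q.z, q.w)

rospy.init_node("zed_pose_check")
rospy.Subscriber("/zed/zed_node/pose", PoseStamped, on_pose)
rospy.spin()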

Testing of ZED video streaming

Option 1: GSCAM/UDP (Not Recommended)

GSCAM is used in the original Redtail project. However, it is not possible to stream gscam via UDP in that setup, nor to use two sinks (UDP H.264 streaming plus a ROS topic) in parallel. This appears to be an issue with Jetpack 4.2.x and the underlying ROS Melodic gscam/gstreamer version (to be further investigated).

If you still want to use it with the original monocular Trailnet code, start gscam as follows:

rosrun gscam gscam _gscam_config:="v4l2src device=/dev/video0 ! video/x-raw, width=3840, height=1080 ! videocrop top=0 left=0 right=1920 bottom=0 ! videoconvert"

Then start rviz in another terminal window on the TX2. Select the /camera/image_raw topic and you should see the unrectified image of the left ZED camera displayed. The camera node (gscam) publishes to the /camera/image_raw topic using the standard ROS Image message. This is the master source that provides input to other nodes such as the TrailNet DNN and obstacle avoidance.

Option 2: ROS to RTSP

This is my preferred solution, since it lets you stream different image topics in parallel to multiple ground control stations. Install the ROS to RTSP server as explained in the install section of this wiki.

Run the stereoDNN ROS node as described below, then start the ROS to RTSP server:

roslaunch ros_rtsp rtsp_streams.launch

On your GCS, enter the following pipeline: rtsp://10.0.1.128:8554/<my mountpoint> # for <my mountpoint>, enter one of the mount points defined in the ros_rtsp config file as described in the install section

You should then see the video image displayed on your GCS. Here is an example of the ZED depth confidence image topic displayed in QGroundControl on an Android tablet.

ZED Confidence Map

Testing of MAVROS and communication between drone, TX2 and Arducopter

First, test whether the MAVROS connection to the Pixhawk FCU is working correctly. To do so, enter in a terminal window on the TX2:

roslaunch px4_controller ap_mavros_controller.launch

As a result, you should see something like this:

....
[ INFO] [1571316155.497925789]: VER: 1.1: VID/PID:             0000:0000
[ INFO] [1571316155.497986877]: VER: 1.1: UID:                 0000000000000000
[ INFO] [1571316164.486519954]: HP: requesting home position
[ INFO] [1571316164.497373237]: FCU: ArduCopter V3.6.10 (1c04a91e)
[ INFO] [1571316164.498804309]: FCU: PX4: 1e7ed30b NuttX: 1472b16c
[ INFO] [1571316164.499838005]: FCU: PX4v3 003E003A 30365119 30393333
[ INFO] [1571316164.499981237]: FCU: Frame: QUAD
[ INFO] [1571316169.493229623]: WP: mission received
[ INFO] [1571316174.486130781]: HP: requesting home position
[ WARN] [1571316184.060467553]: GP: No GPS fix
[ INFO] [1571316186.276560104]: PR: parameters list received
...

Note: there is a defect in MAVROS which results in a stream of timesync warnings: "TM : RTT too high for timesync".

Workaround: set timesync rate in file /opt/ros/melodic/share/mavros/launch/apm_config.yaml to:

timesync_rate: 0 # TIMESYNC rate in Hertz (10.0 is default, feature disabled if 0.0)

Test the joystick node

Option 1 - joystick connected to the GCS PC: Enter the following in a terminal window on the host PC (see also the install instructions):
rosrun joy joy_node _dev:=/dev/input/js0

Then, on the TX2, run rostopic echo /joy in a terminal window. When you move the joystick controls, you should see the values changing.

Option 2 - usage of the ROS Joy Controller app: Launch the app, ensure you have entered the correct ROS_IP under the ROS settings, and then on the TX2, run rostopic echo /joy in a terminal window. When you move the joystick controls, you should see the values changing.

Option 3 - usage of the joystick controller in Solex: Launch Solex, ensure that Solex-CC is enabled and the joystick worker is loaded. Then, on the TX2, run rostopic echo /joy in a terminal window. When you move the joystick controls, you should see the values changing.

Next, we test the px4_controller with the input from the Trailnet and YOLO nodes. To do so, enter in a TX2 terminal:

roslaunch caffe_ros ap_zed_gscam_trailnet_yolo_robot.launch

or

roslaunch caffe_ros ap_zed_ros_trailnet_yolo_robot.launch

Then, in another window, enter:

roslaunch px4_controller ap_robot_controller.launch

The controller node should subscribe to the Trailnet and YOLO outputs and to the joystick output, and read MAVLink messages from MAVROS. An example output should look like this:

Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://10.0.1.128:46813/

SUMMARY
========

PARAMETERS
 * /ardupilot_controller/altitude_gain: 0.0
 * /ardupilot_controller/dnn_lateralcorr_angle: 10.0
 * /ardupilot_controller/dnn_turn_angle: 10.0
 * /ardupilot_controller/joy_type: shield
 * /ardupilot_controller/linear_speed: 1.0
 * /ardupilot_controller/vehicle_type: ardupilot
 * /rosdistro: melodic
 * /rosversion: 1.14.3

NODES
  /
    ardupilot_controller (px4_controller/px4_controller_node)

ROS_MASTER_URI=http://localhost:11311

process[ardupilot_controller-1]: started with pid [16209]
[ INFO] [1576083856.194255371]: Starting px4_controller ROS node...
[ INFO] [1576083856.206994769]: Initializing DNN Autopilot...
[ INFO] [1576083856.261787842]: ROS spin rate                 : 20.0 Hz
[ INFO] [1576083856.262044627]: Vehicle type                  : ardupilot
[ INFO] [1576083856.262254087]: Joystick type                 : shield
[ INFO] [1576083856.262329090]: ROS command queue size        : 5
[ INFO] [1576083856.262385471]: Linear speed                  : 1.0 m/s
[ INFO] [1576083856.262435708]: Altitude gain                 : 0.0 m
[ INFO] [1576083856.262808133]: DNN control class count       : 6
[ INFO] [1576083856.262939422]: DNN turn angle (0..90 deg)    : 10.0 deg
[ INFO] [1576083856.263364100]: DNN lat corr angle (0..90 deg): 10.0 deg
[ INFO] [1576083856.263804010]: Direction filter innov coeff  : 1.00
[ INFO] [1576083856.264324939]: Object detection limit        : -1.00
[ INFO] [1576083856.320550535]: Subscribed to /joy topic (joystick)
[ INFO] [1576083856.330410297]: Subscribed to /trails_dnn/network/output topic (DNN based planner)
[ INFO] [1576083856.347810344]: Subscribed to /object_dnn/network/output topic (object detection DNN)
[ INFO] [1576083856.348061401]: Initializing DNN Autopilot...
[ INFO] [1576083856.348260205]: Waiting for FCU board...
[ INFO] [1576083856.348404772]: Connected to FCU board.
[ INFO] [1576083856.348601912]: Getting current pose...
[ INFO] [1576083861.348439686]: Sending warmup messages...
[ INFO] [1576083866.348454716]: Switching to GUIDED and arming...
[ INFO] [1576083871.370646084]: Vehicle armed
[ INFO] [1576083871.370827833]: Sending takeoff for ArduCopter
[ INFO] [1576083871.383868262]: PX4Controller is processing requests...
[ INFO] [1576083871.384045179]: Armed mode.
[ INFO] [1576083871.384897482]: Switching to Takeoff...
[ INFO] [1576083871.433988749]: Takeoff mode. Distance to end point = 0.000000
[ INFO] [1576083871.434341881]: Switching to Navigate. Altitude: 0.000000

Finally, the copter should switch to GUIDED mode and arm, and the motors should spin up for takeoff.

Testing of the DNN nodes

Image Pub Node

This is a test node which can be used instead of an attached real camera. There is one mandatory parameter, img_path, which specifies the path to an image or video file. The node supports setting custom frame rates as well as repeating an image indefinitely. The default topic is /camera/image_raw, which can be changed using the camera_topic parameter.

An example of publishing frames from a video file using the file's native frame rate:

rosrun image_pub image_pub_node _img_path:=/data/videos/mytestvideo.mp4

An example of publishing the same image repeatedly at 30 frames/sec:

rosrun image_pub image_pub_node _img_path:=/home/apsync/redtail/ros/packages/caffe_ros/tests/data/rot_c.jpg _pub_rate:=30 _repeat:=true

Run rviz and select the /camera/image_raw topic to verify the output.

Trailnet

When you run the Trailnet DNN for the first time, it takes a while for the model to be compiled and loaded. The compiled model is cached, so this delay occurs only on the first run. While this is running, the camera streams images on the /camera/image_raw ROS topic; the caffe_ros node subscribes to this topic and uses the images to select a turning angle for the drone using TrailNet.

This node publishes the output of the DNN (e.g. the softmax layer) using the standard Image message. For a 1D output like that of the softmax layer, the dimensions of the published image are 1x1xC, where C is the size of the layer (e.g. 6 for the TrailNet out layer or 1000 for ImageNet's softmax layer). The encoding field of the message has the format 32FCXXX, where XXX equals the number of channels: for TrailNet 32FC6, for ImageNet 32FC1000. 32 means 32-bit and F means float; see the ROS documentation for more details. The data field contains the output of the DNN as a byte array, so it should be cast to a 32-bit float type. The default topic name is /trails_dnn/network/output, but it can be changed via the launch file or the command line. To test, run the image_pub node as described above, then enter:

rosrun caffe_ros caffe_ros_node __name:=trails_dnn _prototxt_path:=/home/apsync/redtail/models/pretrained/TrailNet_SResNet-18.prototxt _model_path:=/home/apsync/redtail/models/pretrained/TrailNet_SResNet-18.caffemodel _output_layer:=out

Test the output with: rostopic echo /trails_dnn/network/output

You should see something like:

header: 
  seq: 6031
  stamp: 
    secs: 1571062985
    nsecs:    296575
  frame_id: ''
height: 1
width: 1
encoding: "32FC6"
is_bigendian: 0
step: 24
data: [0, 64, 44, 61, 0, 224, 116, 63, 0, 32, 177, 58, 0, 224, 253, 62, 0, 224, 190, 62, 0, 128, 6, 62]
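
If you want to consume this output in your own node instead of via rostopic, the byte array in the data field can be reinterpreted as 32-bit floats with numpy. A minimal rospy sketch, assuming the default /trails_dnn/network/output topic (the node name is arbitrary):

# Minimal sketch: decode the TrailNet output (32FC6 -> 6 float32 values)
import numpy as np
import rospy
from sensor_msgs.msg import Image

def on_dnn_output(msg):
    # msg.data is a byte array; reinterpret it as 32-bit floats
    probs = np.frombuffer(msg.data, dtype=np.float32)
    rospy.loginfo("TrailNet output: %s", probs)

rospy.init_node("trailnet_output_check")
rospy.Subscriber("/trails_dnn/network/output", Image, on_dnn_output)
rospy.spin()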

YOLO

If the node is launched with the post_proc:=YOLO argument, a special YOLO post-processing is applied. In this case, the node publishes the output of the DNN as a standard Image message in the following format: a 2D, single-channel "image" of dimensions WxHx1 (so encoding == 32FC1), where W is fixed at 6 and H equals the number of detected objects. For example, if the DNN has detected 2 objects, the output is a 6x2 image. For each detected object, the 6 values are the following:

0  : label (class) of the detected object (e.g. person or a dog).
1  : probability of this object.
2,3: x and y coordinates of the top left corner of the object in image coordinates.
4,5: width and height of the object in image coordinates.

To test both Trailnet and YOLO processing with a camera connected via a gscam node, run:

roslaunch caffe_ros ap_zed_gscam_trailnet_yolo_robot.launch

If you are running the ZED camera with the zed-ros-wrapper (recommended), then use the following launch file:

roslaunch caffe_ros ap_zed_ros_trailnet_yolo_robot.launch

Test the output with: rostopic echo /object_dnn/network/output

You should see something like:

header: 
  seq: 291
  stamp: 
    secs: 1571064395
    nsecs: 763886929
  frame_id: "/camera_link"
height: 2
width: 6
encoding: "32FC1"
is_bigendian: 0
step: 24
data: [0, 0, 96, 65, 192, 22, 123, 63, 0, 0, 186, 66, 0, 128, 168, 67, 0, 32, 148, 68, 0, 128, 57, 68, 0, 0, 152, 65, 176, 105, 250, 62, 0, 192, 171, 68, 0, 128, 232, 67, 0, 0, 210, 67, 0, 0, 188, 67]
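
To process these detections programmatically, the same byte-to-float conversion applies; the resulting array can be reshaped into one row of 6 values per detected object. A minimal sketch of such a helper (the function name is hypothetical; it only assumes the message format described above):

# Minimal sketch: convert a YOLO output message (32FC1, width 6) into a list of detections
import numpy as np

def decode_yolo_output(msg):
    values = np.frombuffer(msg.data, dtype=np.float32)
    detections = values.reshape(msg.height, msg.width)  # one row per detected object
    # columns: label, probability, top-left x, top-left y, width, height
    return [{'label': int(d[0]), 'prob': float(d[1]),
             'x': d[2], 'y': d[3], 'w': d[4], 'h': d[5]} for d in detections]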

You may also check the Trailnet output while running with the gscam node or the zed-ros-wrapper node:

rostopic echo /trails_dnn/network/output

The output should be similar to the Trailnet test section above, but the values should change when you move the camera.

The YOLO implementation that is part of Trailnet is fairly basic. It detects certain object classes such as "person" and then initiates a stop command to the drone once the detected object reaches a defined aspect/height ratio, as illustrated in the sketch below.
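
As a rough illustration of that idea (this is not the actual px4_controller code), such a stop decision can be reduced to checking whether a detected person's bounding box exceeds a given fraction of the image height. The class index and threshold below are hypothetical:

# Illustration only - not the actual px4_controller logic.
PERSON_CLASS = 14        # hypothetical class index for "person"
STOP_HEIGHT_RATIO = 0.4  # hypothetical threshold: stop when the box fills 40% of the frame height

def should_stop(detections, image_height):
    # detections: list of dicts as returned by decode_yolo_output() above
    for d in detections:
        if d['label'] == PERSON_CLASS and d['h'] / float(image_height) > STOP_HEIGHT_RATIO:
            return True
    return False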

Darknet-YOLO

As an alternative that displays the recognized objects as a dedicated ROS image topic, you may use the Darknet-YOLO implementation. Once it is installed as described in the Installation chapter of this wiki, you can launch it by entering:

roslaunch zed_wrapper zed.launch
# followed by 
roslaunch darknet_ros darknet_ros.launch 

The video stream of the ZED overlaid with the bounding boxes of the identified objects can be visualized by subscribing to /darknet_ros/detection_image, either in rviz or as an RTSP stream using the ROS to RTSP service as described above.

YOLO Object recognition displayed in Solex

stereoDNN Node

While caffe_ros expects a monocular camera input, the stereoDNN node allows the processing of stereo images for depth sensing. As a first step, test whether the different DNN models are working correctly.

Building the inference code: The static inference library nvstereo_inference, located at ./lib/, contains the implementation of the TensorRT plugins required to run the Stereo DNN. A sample application located at ./sample_app/ provides an example of library usage. The library, the sample application and the tests should already have been built by the Redtail install script. If not, please follow the steps described in the original Nvidia Redtail documentation.

First, run the tests to make sure everything is working as expected:

./bin/nvstereo_tests_debug ./tests/data

All tests should pass (obviously). Then run the sample application for the NVTiny and ResNet18-2D models:

./bin/nvstereo_sample_app_debug nvsmall 513 161 ./models/NVTiny/TensorRT/trt_weights.bin ./sample_app/data/img_left.png ./sample_app/data/img_right.png ./bin/disp.bin

./bin/nvstereo_sample_app_debug resnet18_2D 513 257 ./models/ResNet-18_2D/TensorRT/trt_weights_fp16.bin ./sample_app/data/img_left.png ./sample_app/data/img_right.png ./bin/disp.bin fp16

./bin/nvstereo_sample_app_debug resnet18_2D 513 257 ./models/ResNet-18_2D/TensorRT/trt_weights.bin ./sample_app/data/img_left.png ./sample_app/data/img_right.png ./bin/disp.bin fp32

Note: with Jetpack 4.2, the nvsmall and resnet18 models result in a memory overflow, likely because of an incompatibility between the TRT 5 version and the TRT 4 version used in the original Redtail code. ResNet18_2D fp16 has the highest performance and is therefore used in the stereo_dnn_ros node. With Jetpack 4.4, all samples run fine.

The test app takes 8 parameters:

  • model type (nvsmall or resnet18)
  • dimensions of the image (width and height - must be equal to dimensions of network input)
  • path to weights file created by model builder script
  • 2 images, left and right (e.g. PNG files)
  • path to output file, the app will create 2 files: binary and PNG
  • [optional] data type (fp32 or fp16)

Next, we test the stereo_dnn_ros node, running the ResNet18_2D DNN together with the ZED camera, first with fp16 and then with fp32:

roslaunch stereo_dnn_ros ap_zed_resnet18_2D_fp16.launch
# and then in a second step:
roslaunch stereo_dnn_ros ap_zed_resnet18_2D_fp32.launch

Then run rviz in another terminal and select the stereo_dnn_ros/network/output topic in an image view. You should see the depth map calculated from the stereo images. You will notice the difference in frame rate between fp16 and fp32.

Depth map from the stereoDNN network in rviz

Finally, we want to check the color-coded depth map in rviz. To do so, launch the stereo_dnn_ros_viz package in addition to the previous step.

roslaunch stereo_dnn_ros ap_zed_resnet18_2D_fp16.launch
# and then in another terminal window:
roslaunch stereo_dnn_ros_viz ap_debug_viz.launch

As a result, you should see in rviz, on the stereo_dnn_ros_viz/output topic, the left and right rectified images followed by the depth map and the color-coded confidence map.

Color-coded confidence map

ZED Image and Video Recording

In order to capture still images and to record the video stream from one of the ZED topics, you may use the following commands. First, start the stereoDNN nodes:

roslaunch stereo_dnn_ros ap_zed_resnet18_2D_fp16.launch
# and then in another terminal window:
roslaunch stereo_dnn_ros_viz ap_debug_viz.launch
# Then launch the ROS image_view nodes for each topic you want to record

Image recording

Launch the ROS image_view image_saver node for each topic you want to record in a terminal window:

# Start image_view to take images from the colored depth map output
rosrun image_view image_saver image:=/stereo_dnn_ros_viz/output _save_all_image:=false _filename_format:=$HOME/dflogger/dataflash/zed_image/col_depth%04i.jpg __name:=depth_color_image_saver &
# Start image_view to take images from the rectified left camera stream
rosrun image_view image_saver image:=/zed/zed_node/left/image_rect_color _save_all_image:=false _filename_format:=$HOME/dflogger/dataflash/zed_image/left%04i.jpg __name:=zed_image_saver &

The image path in the examples above points to the SD card, as configured in apsync for recording the dataflash logs.

To take an image, call the corresponding rosservice in another terminal window:

# Takes the left camera picture:
rosservice call /zed_image_saver/save
# Takes an image of the depth_color map
rosservice call /depth_color_image_saver/save

Video Recording

For video recording, use the following ROS services in a terminal window:

# record the left zed camera stream to file:
rosrun image_view video_recorder image:=/zed/zed_node/left/image_rect_color _filename:=$HOME/dflogger/dataflash/zed_video/zed_video.avi _codec:="I420"
# record the colored stereo depth stream to file:
rosrun image_view video_recorder image:=/stereo_dnn_ros_viz/output _filename:=$HOME/dflogger/dataflash/zed_video/zed_depth_video.avi _codec:="I420"

You need to press CTRL-C in order to stop recording. The video files are stored on the SD card of the TX2 (assuming the apsync configuration for the dataflash logger is set up accordingly).