Projects regarding the implementation of:
- sensor fusion of LiDAR, camera, and IMU data;
- deep learning networks for semantic segmentation and depth estimation from 2D images;
- a deep learning network for 3D object detection from point clouds.
Only the assignment handouts are included, since the professors asked not to share the code or the final reports.
Overall grade for the projects: 114% (above full marks thanks to bonus points from the optional assignments).
Visualize the outputs of common autonomous driving tasks, such as 3D object detection and point cloud semantic segmentation, given a LiDAR point cloud, the corresponding RGB camera image, ground-truth semantic labels, and the network's bounding box predictions.
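As a rough illustration of what such a visualization can look like, the sketch below projects a LiDAR point cloud onto the camera image and colours each point by its semantic label. The projection matrix `P`, the array names, and the assumption that the points are already expressed in the camera frame are placeholders for this example, not part of the assignment.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_lidar_on_image(points, labels, image, P):
    """Project LiDAR points into the camera image, coloured by semantic class.

    points : (N, 3) LiDAR coordinates, assumed already in the camera frame
             (a real setup would first apply the LiDAR-to-camera extrinsics).
    labels : (N,) integer semantic label per point.
    image  : (H, W, 3) RGB image.
    P      : (3, 4) camera projection matrix (hypothetical calibration).
    """
    # Homogeneous coordinates and pinhole projection.
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    proj = (P @ pts_h.T).T                       # (N, 3)
    depth = proj[:, 2]
    uv = proj[:, :2] / depth[:, None]            # pixel coordinates

    # Keep only points in front of the camera and inside the image bounds.
    h, w = image.shape[:2]
    mask = (depth > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
                       & (uv[:, 1] >= 0) & (uv[:, 1] < h)

    plt.imshow(image)
    plt.scatter(uv[mask, 0], uv[mask, 1], c=labels[mask], s=1, cmap="tab20")
    plt.axis("off")
    plt.show()
```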
In addition, recover the laser ID of each point directly from the point cloud, and correct the point cloud distortion caused by the vehicle's motion with the aid of GPS/IMU data.
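A minimal sketch of both steps, assuming a Velodyne-style spinning LiDAR: the laser ID of each point is recovered by matching its elevation angle to the sensor's known beam angles, and the motion distortion is removed by interpolating the vehicle pose from GPS/IMU at each point's firing time and re-expressing the point in a single reference pose. `beam_angles` and the `pose_at` helper are hypothetical inputs, not names from the assignment.

```python
import numpy as np

def estimate_laser_ids(points, beam_angles):
    """Assign each point to the closest beam elevation angle.

    points      : (N, 3) LiDAR coordinates (x, y, z).
    beam_angles : (B,) known vertical angles of the sensor's lasers, in radians
                  (hypothetical; taken from the sensor datasheet in practice).
    """
    elevation = np.arctan2(points[:, 2], np.linalg.norm(points[:, :2], axis=1))
    return np.argmin(np.abs(elevation[:, None] - beam_angles[None, :]), axis=1)

def deskew_points(points, timestamps, pose_at, t_ref):
    """Undo motion distortion by re-expressing every point in the pose at t_ref.

    timestamps : (N,) firing time of each point within the sweep.
    pose_at    : callable returning a 4x4 world-from-vehicle transform for a
                 given time, e.g. interpolated from GPS/IMU readings
                 (hypothetical helper, not part of the assignment code).
    """
    T_ref_inv = np.linalg.inv(pose_at(t_ref))
    corrected = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, timestamps)):
        T = T_ref_inv @ pose_at(t)               # motion between t and t_ref
        corrected[i] = (T @ np.append(p, 1.0))[:3]
    return corrected
```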
Further info: Assignment_Project1.pdf
Build Multi-Task Learning (MTL) architectures for dense prediction tasks, namely semantic segmentation and monocular depth estimation, exploiting joint architectures, branched architectures, and task distillation.
Finally, improve the network with original ideas or with techniques from existing papers to enhance the predictions.
Further info: Assignment_Project2.pdf
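As a toy example of the branched-architecture flavour of MTL, the sketch below shares a small convolutional encoder between two task-specific decoders, one emitting per-pixel class logits and one emitting a per-pixel depth map; the two task losses are summed for a joint training step. Layer sizes and module names are invented for the example and do not reflect the networks used in the assignment.

```python
import torch
import torch.nn as nn

class TinyMTLNet(nn.Module):
    """Minimal branched multi-task network: shared encoder, two decoders."""

    def __init__(self, num_classes=19):
        super().__init__()
        # Shared encoder (downsamples by 4).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Task-specific decoders (upsample back to the input resolution).
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.depth_head(feats)

# Joint training step: sum the (possibly weighted) task losses.
model = TinyMTLNet()
image = torch.randn(2, 3, 128, 256)
seg_gt = torch.randint(0, 19, (2, 128, 256))
depth_gt = torch.rand(2, 1, 128, 256)

seg_pred, depth_pred = model(image)
loss = nn.functional.cross_entropy(seg_pred, seg_gt) \
     + nn.functional.l1_loss(depth_pred, depth_gt)
loss.backward()
```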
Build a two-stage 3D object detector that detects vehicles in autonomous driving scenes, i.e. draws a 3D bounding box around each vehicle. Unlike Project 2, which was based on 2D images, this project exploits irregular 3D point cloud data to detect vehicles.
The first stage, often referred to as the Region Proposal Network (RPN), produces coarse detections from the irregular point cloud data; the second-stage network then refines these initial detections to generate the final predictions.
Further info: Assignment_Project3.pdf
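A very coarse sketch of the two-stage idea, kept runnable by operating on a bird's-eye-view (BEV) pseudo-image rather than raw points: the RPN densely regresses an objectness score and a coarse 7-parameter box (x, y, z, w, l, h, yaw) per BEV cell, and the second stage refines the top-scoring proposals from their pooled features. All module names, feature sizes, and the box encoding are illustrative assumptions, not the assignment's actual pipeline.

```python
import torch
import torch.nn as nn

class TwoStageDetector3D(nn.Module):
    """Toy two-stage 3D detector on a BEV feature map (illustrative only)."""

    def __init__(self, box_dim=7):                # (x, y, z, w, l, h, yaw)
        super().__init__()
        # Stage 1: Region Proposal Network over the BEV grid.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.rpn_cls = nn.Conv2d(64, 1, 1)        # objectness per cell
        self.rpn_reg = nn.Conv2d(64, box_dim, 1)  # coarse box per cell
        # Stage 2: refine each proposal from its pooled features.
        self.refine = nn.Sequential(
            nn.Linear(64 + box_dim, 64), nn.ReLU(inplace=True),
            nn.Linear(64, box_dim),
        )

    def forward(self, bev, top_k=50):
        feats = self.backbone(bev)                       # (B, 64, H, W)
        scores = self.rpn_cls(feats).flatten(1)          # (B, H*W)
        boxes = self.rpn_reg(feats).flatten(2)           # (B, 7, H*W)
        # Keep the top-k most confident coarse proposals.
        idx = scores.topk(top_k, dim=1).indices          # (B, k)
        flat_feats = feats.flatten(2)                    # (B, 64, H*W)
        pooled, coarse = [], []
        for b in range(bev.shape[0]):
            pooled.append(flat_feats[b, :, idx[b]].T)    # (k, 64)
            coarse.append(boxes[b, :, idx[b]].T)         # (k, 7)
        pooled, coarse = torch.stack(pooled), torch.stack(coarse)
        # Stage 2 predicts residuals on top of the coarse boxes.
        refined = coarse + self.refine(torch.cat([pooled, coarse], dim=-1))
        return coarse, refined

# A BEV occupancy grid built from the voxelized point cloud would be the input.
detector = TwoStageDetector3D()
coarse, refined = detector(torch.randn(1, 1, 64, 64))
```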