More demos at the bottom of the page.
- OpenCV, scikit-learn, and PyTorch.
Create a Conda environment:
conda create -n lane_det python=3.8 -y
conda activate lane_det
Install required packages:
pip install -r requirements.txt
Check the PyTorch website to find the best way to install PyTorch on your computer.
To run without a GPU, the standard PyTorch builds are sufficient. To set the use_gpu flag to True, you must install CUDA first and then install the CUDA-enabled PyTorch build from the PyTorch website.
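In practice, the use_gpu flag only needs to decide which device PyTorch runs on. The helper below is a minimal sketch of that idea (the function name is hypothetical, not part of this repo), assuming a CUDA-enabled PyTorch build is installed when the flag is True:

```python
import torch

def select_device(use_gpu: bool) -> torch.device:
    """Hypothetical helper: map the use_gpu flag to a torch device."""
    if use_gpu and torch.cuda.is_available():
        return torch.device("cuda")
    # Fall back to the CPU when no GPU or no CUDA-enabled build is available.
    return torch.device("cpu")

print(select_device(use_gpu=False))  # -> cpu
```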
The pretrained models (TuSimple and CULane) must be downloaded from Ultra-Fast-Lane-Detection and placed in the models folder for the code to run.
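A quick way to confirm the weights are in place before running anything is a small check like the one below. The checkpoint filenames are the ones commonly shipped with Ultra-Fast-Lane-Detection and may differ from your download:

```python
from pathlib import Path

# Assumed checkpoint names -- adjust to match the files you actually downloaded.
expected = ["models/tusimple_18.pth", "models/culane_18.pth"]

for path in expected:
    status = "found" if Path(path).is_file() else "MISSING"
    print(f"{path}: {status}")
```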
In the detection code you can switch between the pretrained models trained on the CULane and TuSimple datasets; the ResNet-18 (r18) versions are currently used. We still need to work out how to implement newer or different models, most likely based on Ultra-Fast-Lane-Detection-V2 or CLRNet, to improve results.
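Switching models then comes down to changing the model path and model type passed to the detector. The sketch below assumes the inference wrapper from the original Ultra-Fast-Lane-Detection-based code; the module, class, and enum names are assumptions and may be named differently in this repo:

```python
from ultrafastLaneDetector import UltrafastLaneDetector, ModelType  # assumed import path

# Pick one of the two pretrained r18 variants by changing these two lines.
model_path = "models/culane_18.pth"    # or "models/tusimple_18.pth"
model_type = ModelType.CULANE          # or ModelType.TUSIMPLE

# use_gpu=True requires a CUDA-enabled PyTorch build (see the install notes above).
lane_detector = UltrafastLaneDetector(model_path, model_type, use_gpu=False)
```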
- Input: RGB image of size 1280 x 720 pixels.
- Output: Keypoints for a maximum of 4 lanes (left-most lane, left lane, right lane, and right-most lane); see the input sketch below.
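As a small illustration of the input side (the image path below is hypothetical; any RGB test image works):

```python
import cv2

# Hypothetical test image -- any RGB image placed in Test_Images works.
image = cv2.imread("Test_Images/example.jpg")

# Resize to the 1280x720 input resolution; note cv2.resize takes (width, height).
image = cv2.resize(image, (1280, 720))
print(image.shape)  # (720, 1280, 3)
```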
Image inference: Takes the provided images and displays the detected lanes on each image. Place all test images in the Test_Images folder. The output for each photo is saved in the Output folder.
python imageLaneDetection.py
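Roughly, the script boils down to a loop like the following; the detector class and its detect_lanes method are assumed from the original inference wrapper, and the glob pattern is an assumption (the actual script may accept other image formats):

```python
import glob, os
import cv2
from ultrafastLaneDetector import UltrafastLaneDetector, ModelType  # assumed import path

lane_detector = UltrafastLaneDetector("models/tusimple_18.pth", ModelType.TUSIMPLE, use_gpu=False)
os.makedirs("Output", exist_ok=True)

for img_path in glob.glob("Test_Images/*.jpg"):            # assumed .jpg inputs
    image = cv2.imread(img_path)
    output_img = lane_detector.detect_lanes(image)          # image with detected lanes drawn
    cv2.imwrite(os.path.join("Output", os.path.basename(img_path)), output_img)
```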
Video inference: Takes .mp4 videos saved in the Test_Videos folder and detects the lanes on each frame. The output for each video is saved in the Output folder as an .mp4 file.
python videoLaneDetection.py
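Per video, the processing is essentially the frame loop below; the filenames are hypothetical and the detector names are assumed, as in the image sketch above:

```python
import cv2
from ultrafastLaneDetector import UltrafastLaneDetector, ModelType  # assumed import path

lane_detector = UltrafastLaneDetector("models/culane_18.pth", ModelType.CULANE, use_gpu=False)

cap = cv2.VideoCapture("Test_Videos/example.mp4")           # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
writer = cv2.VideoWriter("Output/example.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    out_frame = lane_detector.detect_lanes(frame)            # frame with lanes drawn
    writer.write(cv2.resize(out_frame, size))                # keep the original resolution

cap.release()
writer.release()
```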
Webcam inference: Not tested or edited from the original implementation.
python webcamLaneDetection.py