This project was carried out as part of my career preparation project, under the supervision of Ms. Ferdaous Chaaben, and is the result of hard work and dedication. It is an introductory exploration of the automotive world and ADAS (Advanced Driver Assistance Systems).
The project was divided into three parts: object detection, drivable-area detection, and integration of the models into the CARLA (Car Learning to Act) simulator. After training the models and obtaining good results, we can extract from a street image both the elements around the vehicle and the drivable area. Finally, working with the CARLA simulator, I wrote scripts on top of its Python API to run the models and get results inside the simulator.
I worked on the BDD100K dataset from Berkeley; this drive link contains the compressed data used for training each model.
I am working on a subset of the dataset. I trained a U-Net model several times, with different input sizes and parameters, for binary semantic segmentation to extract the road. Using some filters and traditional computer-vision methods on the predicted mask, I was able to highlight in green the lane where it is safe to drive. You can find the trained weights in this link.
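The last step above, turning the U-Net output into a green highlight on the frame, can be sketched as a simple thresholding and alpha-blending operation. This is a minimal illustration assuming the model emits an HxW probability map; the function name and parameters are my own, not part of the project code:

```python
import numpy as np

def overlay_drivable_area(image, mask, threshold=0.5, alpha=0.4):
    """Blend a green overlay onto the pixels the model marks as drivable.

    image: HxWx3 uint8 BGR frame.
    mask:  HxW float map of drivable-area probabilities (U-Net output).
    """
    drivable = mask > threshold          # binarize the probability map
    overlay = image.copy()
    green = np.array([0, 255, 0], dtype=np.uint8)
    # alpha-blend pure green over the original pixels inside the mask
    overlay[drivable] = (
        (1 - alpha) * image[drivable] + alpha * green
    ).astype(np.uint8)
    return overlay
```

In practice this would be combined with the morphological filtering mentioned above (e.g. opening/closing the binary mask) before blending, to remove small false-positive blobs.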
Here is an example of the result (the lane boundaries are well marked):
For object detection, I trained YOLOv4 on a custom dataset in Google Colab to get better results.
I trained with the official Darknet framework, then exported the trained weights to run inference. You can find the trained weights in this link.
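Inference with exported Darknet weights typically produces many overlapping candidate boxes per object, which are pruned with non-maximum suppression before drawing. As a sketch of that post-processing step (the function and thresholds here are illustrative, not taken from the project code):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression.

    boxes:  Nx4 array of [x1, y1, x2, y2] corners.
    scores: N confidences. Returns indices of the boxes to keep.
    """
    order = scores.argsort()[::-1]       # highest-confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of box i with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_thresh]  # drop boxes overlapping box i too much
    return keep
```

When loading the weights with OpenCV (`cv2.dnn.readNetFromDarknet`), `cv2.dnn.NMSBoxes` provides the same pruning out of the box; the version above just makes the logic explicit.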
Here is an example of the result:
At the end of this project, I wrote an API to run these models on CARLA.
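Feeding the models from CARLA mostly means attaching an RGB camera sensor to the ego vehicle and converting each frame it emits (a raw BGRA byte buffer) into a NumPy image. A minimal sketch of that glue, assuming a connected `world` and spawned `vehicle` (the helper names are mine, only the `carla` calls are the simulator's actual API):

```python
import numpy as np

def carla_image_to_bgr(raw_bytes, height, width):
    """Convert the raw BGRA buffer a CARLA camera emits into an HxWx3 BGR array."""
    arr = np.frombuffer(raw_bytes, dtype=np.uint8).reshape(height, width, 4)
    return arr[:, :, :3]  # drop the alpha channel

def attach_camera(world, vehicle, on_frame):
    """Spawn an RGB camera on `vehicle` and forward each decoded frame to `on_frame`.

    Requires the `carla` package and a running simulator; sketch only.
    """
    import carla  # imported lazily so the decoder above stays usable without CARLA
    bp = world.get_blueprint_library().find('sensor.camera.rgb')
    transform = carla.Transform(carla.Location(x=1.5, z=2.4))  # hood-mounted
    camera = world.spawn_actor(bp, transform, attach_to=vehicle)
    camera.listen(lambda image: on_frame(
        carla_image_to_bgr(image.raw_data, image.height, image.width)))
    return camera
```

The `on_frame` callback is where each frame would be passed through the YOLOv4 detector and the U-Net segmentation model before being displayed.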