balnarendrasapa / road-detection

This is a course project for DSCI-6011 - Deep Learning. It deals with drivable area and lane segmentation for self-driving cars.

License: GNU General Public License v3.0

Topics: neural-network, pytorch, deep-learning, drivable-area-segmentation, huggingface, huggingface-datasets, huggingface-spaces, image-segmentation, labelme, lane-detection, python, gradio, gradio-interface, stable-diffusion, annotation, cnn


Drivable Area and Lane segmentation

This project detects the drivable area and the lanes on the road. It is mainly about fine-tuning a segmentation model with a dataset we generated using Stable Diffusion.

Dataset

  • This dataset contains images for drivable area segmentation and lane detection. All the images were generated using Stable Diffusion in Google Colaboratory, and the dataset is around 90 MB. Each sample in our project has two label outputs, which are overlaid on the original image.
  • We used Stable Diffusion to generate images for fine-tuning the model. Click the badge below to see how we worked with Stable Diffusion. The model we used is CompVis's stable-diffusion-v1-4, which runs on the T4 GPU that Google provides free of charge.

Open In Colab

Annotation

  • The images are annotated using the labelme tool, an open-source tool for annotating image data. Each image is annotated twice: once for drivable area segmentation and once for lane detection.

Labelme Annotation Tool

(image)

Click here to go to labelme's GitHub repo

Original Image

(image)

Annotation for Drivable Area Segmentation

(image)

Annotation for Lane Detection

(image)

Partitioning

The dataset is structured into three distinct partitions: Train, Test, and Validation. The Train split comprises 80% of the dataset, containing both the input images and their corresponding labels. Meanwhile, the Test and Validation splits each contain 10% of the data, with a similar structure, consisting of image data and label information. Within each of these splits, there are three folders:

  • Images: This folder contains the original images, serving as the raw input data for the task at hand.

  • Segments: Here, you can access the labels specifically designed for Drivable Area Segmentation, crucial for understanding road structure and drivable areas.

  • Lane: This folder contains labels dedicated to Lane Detection, assisting in identifying and marking lanes on the road.
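
Assuming the folder names described above, the unzipped dataset is laid out roughly like this (a sketch, not an exact listing):

Train/
  Images/
  Segments/
  Lane/
Test/
  Images/
  Segments/
  Lane/
Validation/
  Images/
  Segments/
  Lane/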

Accessing the Dataset

The dataset we annotated is available in this repository in the datasets folder as datasets.zip. It is also pushed to Hugging Face Datasets, and you can access it as shown below.

Python

Open In Colab

Click the badge above to see more on how to work with the dataset.

from datasets import load_dataset

dataset = load_dataset("bnsapa/road-detection")
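
Once loaded, you can inspect the splits and pull out a sample. The split and index access below is an assumption; check the dataset card for the exact schema.

print(dataset)
sample = dataset["train"][0]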

This dataset is hosted on Hugging Face. Click here to go to the dataset card.

Through CLI

wget https://github.com/balnarendrasapa/road-detection/raw/master/datasets/dataset.zip

Training

Open In Colab

  • Click the badge above to open the Jupyter notebook that demonstrates how the model is fine-tuned.

Transformation while Training

Random Perspective Transformation:

This transformation simulates changes in the camera's perspective, including rotation, scaling, shearing, and translation. It is applied with random parameters:

  • degrees: Random rotation in the range of -10 to 10 degrees.
  • translate: Random translation in the range of -0.1 to 0.1 times the image dimensions.
  • scale: Random scaling in the range of 0.9 to 1.1 times the original size.
  • shear: Random shearing in the range of -10 to 10 degrees.
  • perspective: A slight random perspective distortion.
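
A minimal sketch of such a transform with OpenCV is shown below. The matrix composition and the perspective magnitude are illustrative choices, not the project's exact implementation; the same matrix should also be applied to the label masks so images and labels stay aligned.

import math
import random

import cv2
import numpy as np

def random_perspective(img, degrees=10, translate=0.1, scale=0.1, shear=10, perspective=0.0005):
    h, w = img.shape[:2]
    # Rotation and scaling around the image center
    a = random.uniform(-degrees, degrees)
    s = random.uniform(1 - scale, 1 + scale)
    R = np.eye(3)
    R[:2] = cv2.getRotationMatrix2D((w / 2, h / 2), a, s)
    # Shear along both axes
    S = np.eye(3)
    S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180)
    S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180)
    # Translation as a fraction of the image dimensions
    T = np.eye(3)
    T[0, 2] = random.uniform(-translate, translate) * w
    T[1, 2] = random.uniform(-translate, translate) * h
    # Slight perspective distortion
    P = np.eye(3)
    P[2, 0] = random.uniform(-perspective, perspective)
    P[2, 1] = random.uniform(-perspective, perspective)
    # Compose the transforms and warp the image
    M = T @ S @ R @ P
    return cv2.warpPerspective(img, M, (w, h), borderValue=(114, 114, 114))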

HSV Color Augmentation:

  • This changes the hue, saturation, and value of the image.
  • Random gains for hue, saturation, and value are applied.
  • The hue is modified by rotating the color wheel.
  • The saturation and value are adjusted by multiplying with random factors.
  • This helps to make the model invariant to changes in lighting and color variations.
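
A minimal sketch of this augmentation using OpenCV lookup tables; the gain values are illustrative defaults, not the project's exact settings.

import cv2
import numpy as np

def augment_hsv(img, hgain=0.015, sgain=0.7, vgain=0.4):
    # Random multiplicative gains around 1.0 for hue, saturation, and value
    r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1
    hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))
    x = np.arange(0, 256, dtype=r.dtype)
    lut_hue = ((x * r[0]) % 180).astype(np.uint8)  # hue wraps around the color wheel
    lut_sat = np.clip(x * r[1], 0, 255).astype(np.uint8)
    lut_val = np.clip(x * r[2], 0, 255).astype(np.uint8)
    img_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val)))
    return cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR)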

Image Resizing:

  • If the images are not already the specified size, they are resized to a fixed size (640x360) using cv2.resize, as in the example below.
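
For example, with OpenCV (the placeholder image is illustrative; note that cv2.resize takes the target size as (width, height)):

import cv2
import numpy as np

img = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a loaded image
resized = cv2.resize(img, (640, 360), interpolation=cv2.INTER_LINEAR)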

Label Preprocessing:

  • The labels (segmentation masks) are thresholded to create binary masks: pixel values are set to 0 or 255 based on a threshold (1 in this case).
  • The binary masks are also inverted to create a binary mask for the background.
  • These binary masks are converted to PyTorch tensors for use in training the semantic segmentation model.
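
A minimal sketch of this preprocessing, assuming uint8 grayscale masks; the two-channel background/foreground stacking is one common layout, not necessarily the project's exact tensor format.

import cv2
import numpy as np
import torch

def preprocess_label(mask, threshold=1):
    # Pixels at or above the threshold become foreground (255), the rest 0
    _, fg = cv2.threshold(mask, threshold - 1, 255, cv2.THRESH_BINARY)
    # Invert the foreground mask to get the background mask
    bg = cv2.bitwise_not(fg)
    # Stack background and foreground channels and convert to a float tensor
    return torch.from_numpy(np.stack([bg, fg]).astype(np.float32) / 255.0)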

Loss

  • Tversky loss and Focal loss are used here. Total loss = Focal Loss + Tversky Loss
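
A minimal sketch of the combined loss for binary masks; the alpha, beta, and gamma values are common defaults, not necessarily the project's exact settings.

import torch
import torch.nn.functional as F

def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-6):
    # pred: predicted probabilities in [0, 1]; target: binary masks of the same shape
    tp = (pred * target).sum()
    fn = ((1 - pred) * target).sum()
    fp = (pred * (1 - target)).sum()
    return 1 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def focal_loss(pred, target, gamma=2.0):
    bce = F.binary_cross_entropy(pred, target, reduction="none")
    pt = torch.exp(-bce)  # model confidence on the true class
    return ((1 - pt) ** gamma * bce).mean()

def total_loss(pred, target):
    return focal_loss(pred, target) + tversky_loss(pred, target)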

Optimization

  • In this setup, an Adam optimizer with a dynamically decreasing learning rate is employed. This adaptive learning rate is regulated using a Polynomial Learning Rate Scheduler, which gradually reduces the learning rate as the training progresses.
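
A minimal sketch of this setup (PolynomialLR requires PyTorch >= 1.13); the learning rate, epoch count, decay power, and stand-in model are all illustrative, not the project's exact values.

import torch

model = torch.nn.Conv2d(3, 2, kernel_size=1)  # stand-in for the segmentation model
epochs = 100
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.PolynomialLR(optimizer, total_iters=epochs, power=0.9)

for epoch in range(epochs):
    # ... forward pass, loss.backward(), optimizer.step() go here ...
    scheduler.step()  # decreases the learning rate polynomially each epoch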

Deployment

  • The model is deployed on Hugging Face Spaces. Click here to go there.
  • You can also deploy the model locally. The three ways to run it are listed below.

Docker-Compose

  • A Docker image, road-detection, is available with this repository.
  • Git clone this repo, cd into deployment, and run docker-compose up, as shown below.
  • Open http://localhost:7860/ in your browser to see the app.
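
For example (the repository URL is taken from the download link above):

git clone https://github.com/balnarendrasapa/road-detection.git
cd road-detection/deployment
docker-compose up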

Docker

  • You can run the following command, which downloads the image and deploys it. Open http://localhost:7860/ in your browser to see the app.
docker run -p 7860:7860 -e SHARE=True ghcr.io/balnarendrasapa/road-detection:latest

Python Virtual Environment

  • cd into the deployment directory and run python -m venv .venv to create a virtual environment, then activate it.
  • Run pip install -r requirements.txt.
  • Run python app.py.
  • Open http://localhost:7860/ in your browser to see the app. The full sequence of commands is shown below.
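
Putting the steps together (the activation command assumes a Unix shell; on Windows use .venv\Scripts\activate instead):

cd deployment
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python app.py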

References

[1] Quang Huy Che, Dinh Phuc Nguyen, Minh Quan Pham, Duc Khai Lam, "TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars," 2023. Click here to go to the TwinLiteNet repository.

[2] The Stable Diffusion code is taken from this source: click here.

