This project is forked from davyneven/fastsceneunderstanding.


Fast Scene Understanding

Torch implementation for simultaneous image segmentation, instance segmentation and single image depth. Two videos can be found here and here.

Example:

If you use this code for your research, please cite our papers:

Fast Scene Understanding for Autonomous Driving
Davy Neven, Bert De Brabandere, Stamatios Georgoulis, Marc Proesmans and Luc Van Gool
Published at "Deep Learning for Vehicle Perception", workshop at the IEEE Symposium on Intelligent Vehicles 2017

and

Semantic Instance Segmentation with a Discriminative Loss Function
Bert De Brabandere, Davy Neven and Luc Van Gool
Published at "Deep Learning for Robotic Vision", workshop at CVPR 2017
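For convenience, the two papers as BibTeX, assembled from the details above (the citation keys and field formatting are placeholders of my choosing; verify against the published versions):

```bibtex
@inproceedings{neven2017fast,
  title     = {Fast Scene Understanding for Autonomous Driving},
  author    = {Neven, Davy and De Brabandere, Bert and Georgoulis, Stamatios
               and Proesmans, Marc and Van Gool, Luc},
  booktitle = {Deep Learning for Vehicle Perception, workshop at the
               IEEE Symposium on Intelligent Vehicles},
  year      = {2017}
}

@inproceedings{debrabandere2017semantic,
  title     = {Semantic Instance Segmentation with a Discriminative Loss Function},
  author    = {De Brabandere, Bert and Neven, Davy and Van Gool, Luc},
  booktitle = {Deep Learning for Robotic Vision, workshop at CVPR},
  year      = {2017}
}
```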

Setup

Prerequisites

Torch dependencies:

Data dependencies:

Download Cityscapes and run the scripts createTrainIdLabelImgs.py and createTrainIdInstanceImgs.py to create annotations based on the training labels. Make sure the dataset folder is named cityscapes.
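Conceptually, those scripts remap the raw Cityscapes label ids to the smaller set of training ids (with 255 as the ignore label). A minimal sketch of that remapping, using a few entries from the official Cityscapes label definitions (the full table lives in cityscapesscripts/helpers/labels.py; this is an illustration, not the scripts themselves):

```python
import numpy as np

# A few id -> trainId entries from the official Cityscapes label table.
ID_TO_TRAINID = {
    0: 255,   # unlabeled -> ignore
    7: 0,     # road
    8: 1,     # sidewalk
    26: 13,   # car
}

def to_train_ids(label_img):
    """Remap a raw-id label image to training ids; unknown ids become 255."""
    out = np.full_like(label_img, 255)
    for raw_id, train_id in ID_TO_TRAINID.items():
        out[label_img == raw_id] = train_id
    return out
```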

Afterwards, create the following txt files:

cd CITYSCAPES_FOLDER

ls leftImg8bit/train/*/*.png > trainImages.txt
ls leftImg8bit/val/*/*.png > valImages.txt

ls gtFine/train/*/*labelTrainIds.png > trainLabels.txt
ls gtFine/val/*/*labelTrainIds.png > valLabels.txt

ls gtFine/train/*/*instanceTrainIds.png > trainInstances.txt
ls gtFine/val/*/*instanceTrainIds.png > valInstances.txt

ls disparity/train/*/*.png > trainDepth.txt
ls disparity/val/*/*.png > valDepth.txt
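The ls commands above can also be reproduced portably in Python, which avoids shell glob quirks on other platforms. A sketch (the function name and signature are my own, not part of this repository):

```python
import glob
import os

def write_list(pattern, out_txt, root="."):
    """Write all paths under root matching pattern to out_txt, one per line."""
    paths = sorted(glob.glob(os.path.join(root, pattern)))
    with open(os.path.join(root, out_txt), "w") as f:
        f.write("\n".join(paths) + "\n")

# e.g. write_list("leftImg8bit/train/*/*.png", "trainImages.txt", CITYSCAPES_FOLDER)
```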

Download pretrained model

To download both the pretrained segmentation model for training and a trained model for testing, run:

sh download.sh

Test pretrained model

To test the pretrained model, make sure you have downloaded both the pretrained model and the Cityscapes dataset (plus the scripts and txt files, see above). Afterwards, run:

qlua test.lua -data_root CITYSCAPES_ROOT

where CITYSCAPES_ROOT is the folder in which cityscapes is located. For other options, see test_opts.lua.

Train your own model

To train your own model, run:

qlua main.lua -data_root CITYSCAPES_ROOT -save true -directory PATH_TO_SAVE

For other options, see opts.lua

TensorFlow code

A third-party TensorFlow implementation of our loss function, applied to lane instance segmentation, is available from Hanqiu Jiang's GitHub repository.
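For readers porting the loss to another framework, here is a minimal NumPy sketch of the discriminative loss from the "Semantic Instance Segmentation with a Discriminative Loss Function" paper. The hyperparameter names (delta_v, delta_d, alpha, beta, gamma) follow the paper; the default values and function signature are illustrative, not taken from this repository's Lua code:

```python
import numpy as np

def discriminative_loss(embeddings, labels, delta_v=0.5, delta_d=1.5,
                        alpha=1.0, beta=1.0, gamma=0.001):
    """embeddings: (N, D) pixel embeddings; labels: (N,) instance ids."""
    ids = np.unique(labels)
    means = np.stack([embeddings[labels == i].mean(axis=0) for i in ids])

    # Variance term: pull pixels to within delta_v of their instance mean.
    var = 0.0
    for mu, i in zip(means, ids):
        d = np.linalg.norm(embeddings[labels == i] - mu, axis=1)
        var += np.mean(np.maximum(0.0, d - delta_v) ** 2)
    var /= len(ids)

    # Distance term: push instance means at least 2 * delta_d apart.
    dist = 0.0
    if len(ids) > 1:
        for a in range(len(ids)):
            for b in range(len(ids)):
                if a != b:
                    d = np.linalg.norm(means[a] - means[b])
                    dist += np.maximum(0.0, 2 * delta_d - d) ** 2
        dist /= len(ids) * (len(ids) - 1)

    # Regularization term: keep instance means close to the origin.
    reg = np.mean(np.linalg.norm(means, axis=1))
    return alpha * var + beta * dist + gamma * reg
```

With two tight, well-separated clusters, only the small regularization term remains, which matches the intuition that such an embedding is already a good instance segmentation.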
