mseg-semantic

Try out our models in Google Colab on your own images!

This repo includes pre-trained semantic segmentation models, plus the training and inference code, for the paper:

MSeg: A Composite Dataset for Multi-domain Semantic Segmentation (CVPR 2020, Official Repo) [PDF]
John Lambert*, Zhuang Liu*, Ozan Sener, James Hays, Vladlen Koltun
Presented at CVPR 2020. Link to MSeg Video (3min)

This repo is the second of four repos that introduce our work. It provides utilities to train semantic segmentation models with an HRNet-W48 or PSPNet backbone, sufficient to produce a winning entry on the WildDash benchmark.

  • mseg-api: utilities to download the MSeg dataset, prepare the data on disk in a unified taxonomy, and map labels into the unified taxonomy on the fly during training (a small remapping sketch follows below).
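
For intuition, the on-the-fly mapping amounts to remapping each pixel's dataset-specific label ID through a lookup table into the universal taxonomy. The sketch below is illustrative only: the lookup-table mechanism and the class IDs are assumptions for the example; mseg-api builds the real mapping.

    import numpy as np

    # Hypothetical lookup table: index = dataset-specific class ID,
    # value = universal class ID (255 = unlabeled/ignore).
    dataset_to_universal = np.full(256, 255, dtype=np.uint8)
    dataset_to_universal[0] = 98   # e.g. dataset "road" -> universal "road" (made-up IDs)
    dataset_to_universal[1] = 101  # e.g. dataset "sidewalk" -> universal "sidewalk"

    def remap_label_map(label_map: np.ndarray) -> np.ndarray:
        """Map a dataset-taxonomy label image into the universal taxonomy."""
        return dataset_to_universal[label_map]

    labels = np.array([[0, 1], [1, 255]], dtype=np.uint8)
    print(remap_label_map(labels))  # [[ 98 101] [101 255]]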

Two additional repos will be introduced in June 2020:

  • mseg-panoptic: provides Panoptic-FPN and Mask-RCNN training, based on Detectron2
  • mseg-mturk: provides utilities to perform large-scale Mechanical Turk re-labeling

Dependencies

Install the mseg module from mseg-api.

Install the MSeg-Semantic module:

  • mseg_semantic can be installed as a Python package using

      pip install -e /path_to_root_directory_of_the_repo/
    

Make sure that you can run import mseg_semantic in Python, and you are good to go!
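
A quick way to check the install is to import the package and print where it resolves from (assuming a standard package layout with an __init__.py):

    # Sanity-check the editable install.
    import mseg_semantic
    print(mseg_semantic.__file__)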

MSeg Pre-trained Models

Each model is 528 MB in size. We provide download links and multi-scale testing results below:

Nicknames: VOC = PASCAL VOC, WD = WildDash, SN = ScanNet

| Model | Training Set | Training Taxonomy | VOC mIoU | PASCAL Context mIoU | CamVid mIoU | WD mIoU | KITTI mIoU | SN mIoU | h. mean | Download Link |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MSeg (1M) | MSeg train | Universal | 70.7 | 42.7 | 83.3 | 62.0 | 67.0 | 48.3 | 59.2 | Google Drive |
| MSeg (3M) | MSeg train | Universal | 72.0 | 44.0 | 84.4 | 59.9 | 66.5 | 49.4 | 59.7 | Google Drive |

Inference: Using our pre-trained models

We show how to perform inference here in our Google Colab.

Multi-scale inference greatly improves the smoothness of predictions, so our demo scripts use a multi-scale config by default. While we train at 1080p, our predictions are often visually better when we feed in test images at 360p resolution.
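
As a rough illustration of the idea (this is not the exact code in universal_demo.py), multi-scale inference runs the network on several resized copies of the image and fuses the per-pixel class scores before taking the argmax:

    import torch
    import torch.nn.functional as F

    def multiscale_predict(model, image, scales=(0.5, 1.0, 1.5)):
        """Fuse softmax scores over several input scales (illustrative sketch).

        image: 1 x 3 x H x W float tensor; model(x) returns 1 x C x h x w logits.
        """
        _, _, H, W = image.shape
        fused = None
        for s in scales:
            resized = F.interpolate(image, scale_factor=s, mode="bilinear",
                                    align_corners=False)
            probs = model(resized).softmax(dim=1)
            # Resample scores back to the original resolution before fusing.
            probs = F.interpolate(probs, size=(H, W), mode="bilinear",
                                  align_corners=False)
            fused = probs if fused is None else fused + probs
        return fused.argmax(dim=1)  # 1 x H x W predicted label map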

If you have video input, and you would like to make predictions on each frame in the universal taxonomy, please set:

input_file=/path/to/my/video.mp4

If you have a set of images in a directory, and you would like to make a prediction in the universal taxonomy for each image, please set:

input_file=/path/to/my/directory

If you have as input a single image, and you would like to make a prediction in the universal taxonomy, please set:

input_file=/path/to/my/image

Now, run our demo script:

model_name=mseg-3m
model_path=/path/to/downloaded/model/from/google/drive
config=mseg_semantic/config/test/default_config_360.yaml
python -u mseg_semantic/tool/universal_demo.py \
  --config=${config} model_name ${model_name} model_path ${model_path} input_file ${input_file}
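
Equivalently, the same command can be launched from Python; the model and input paths below are placeholders, exactly as in the shell snippet above:

    import subprocess

    cmd = [
        "python", "-u", "mseg_semantic/tool/universal_demo.py",
        "--config=mseg_semantic/config/test/default_config_360.yaml",
        "model_name", "mseg-3m",
        "model_path", "/path/to/downloaded/model/from/google/drive",
        "input_file", "/path/to/my/image",
    ]
    subprocess.run(cmd, check=True)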

If you would like to make predictions in a specific dataset's taxonomy, e.g. Cityscapes for the RVC Challenge, please run: (instructions to be added)

Citing MSeg

If you find this code useful for your research, please cite:

@InProceedings{MSeg_2020_CVPR,
author = {Lambert, John and Liu, Zhuang and Sener, Ozan and Hays, James and Koltun, Vladlen},
title = {{MSeg}: A Composite Dataset for Multi-domain Semantic Segmentation},
booktitle = {Computer Vision and Pattern Recognition (CVPR)},
year = {2020}
}

Many thanks to Hengshuang Zhao for his semseg repo, on which much of this repository is based.

Other baseline models from our paper:

Individually-trained models that serve as baselines:

Nicknames: VOC = PASCAL VOC, WD = WildDash, SN = ScanNet

| Model | Training Set | Training Taxonomy | VOC mIoU | PASCAL Context mIoU | CamVid mIoU | WD mIoU | KITTI mIoU | SN mIoU | h. mean | Download Link |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ADE20K (1M) | ADE20K train | Universal | 34.6 | 24.0 | 53.5 | 37.0 | 44.3 | 43.8 | 37.1 | Google Drive |
| BDD (1M) | BDD train | Universal | 13.5 | 6.9 | 71.0 | 52.1 | 55.0 | 1.4 | 6.1 | Google Drive |
| Cityscapes (1M) | Cityscapes train | Universal | 12.1 | 6.5 | 65.3 | 30.1 | 58.1 | 1.7 | 6.7 | Google Drive |
| COCO (1M) | COCO train | Universal | 73.7 | 43.1 | 56.6 | 38.9 | 48.2 | 33.9 | 46.0 | Google Drive |
| IDD (1M) | IDD train | Universal | 14.5 | 6.3 | 70.5 | 40.6 | 50.7 | 1.6 | 6.5 | Google Drive |
| Mapillary (1M) | Mapillary train | Universal | 22.0 | 13.5 | 82.5 | 55.2 | 68.5 | 2.1 | 9.2 | Google Drive |
| SUN RGB-D (1M) | SUN RGBD train | Universal | 10.2 | 4.3 | 0.1 | 1.4 | 0.7 | 42.2 | 0.3 | Google Drive |
| Naive Mix Baseline (1M) | MSeg train | Naive | | | | | | | | Google Drive |
| Oracle (1M) | – | – | 77.0 | 46.0 | 79.1 | – | 57.5 | 62.2 | – | see below |

Oracle model download links (one oracle per test dataset):

| VOC 2012 (1M) | PASCAL Context (1M) | CamVid (1M) | WildDash | KITTI (1M) | ScanNet-20 (1M) |
| --- | --- | --- | --- | --- | --- |
| Model | Model | Model | N/A** | Model | Model |

Note that the number of output classes is identical (194) for seven of the models listed above: ADE20K (1M), BDD (1M), Cityscapes (1M), COCO (1M), IDD (1M), Mapillary (1M), and SUN RGB-D (1M). These are the models that represent a single training dataset's performance. When we train a baseline model on a single dataset, we train it in the universal taxonomy (with 194 classes). If we did not, we would need to specify 7*6=42 mappings, which would be unbelievably tedious and fairly redundant: we measure each model's performance by zero-shot cross-dataset generalization, so each of the 7 training datasets, with its own taxonomy, would need its own mapping to each of the 6 test datasets.

By training each single-training-dataset baseline within the universal taxonomy, we only need to specify 7+6=13 mappings (each training dataset's taxonomy to the universal taxonomy, and the universal taxonomy to each test dataset's taxonomy).
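
The counting argument is easy to check directly (dataset names as in the tables above):

    train_sets = ["ADE20K", "BDD", "Cityscapes", "COCO", "IDD", "Mapillary", "SUN RGB-D"]
    test_sets = ["VOC", "PASCAL Context", "CamVid", "WildDash", "KITTI", "ScanNet"]

    # One direct mapping per (training taxonomy, test taxonomy) pair:
    print(len(train_sets) * len(test_sets))  # 42

    # Via the universal taxonomy: train->universal, plus universal->test:
    print(len(train_sets) + len(test_sets))  # 13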

**WildDash has no training set, so an "oracle" model cannot be trained.

Experiment Settings

We use the HRNetV2-W48 architecture. All images are resized to 1080p (shorter side = 1080) at training time before a crop is taken.

We run inference with the shorter side of each test image at three resolutions (360p, 720p, 1080p), and take the max among these three resolutions. Note that in the original semseg repo, the author specifies the longer side of an image, whereas we specify the shorter side. Batch size is set to 35.

We generally follow the recommendations of Zhao et al.: our data augmentation consists of random scaling in the range [0.5, 2.0] and random rotation in the range [-10, 10] degrees. We use SGD with momentum 0.9 and weight decay of 1e-4, with a polynomial learning rate schedule with power 0.9; the base learning rate is set to 1e-2. An auxiliary cross-entropy (CE) loss is added on intermediate activations, combined linearly with weight 0.4. In our data, we use 255 as the ignore/unlabeled flag for the CE loss. We use PyTorch's Distributed Data Parallel (DDP) package for multiprocessing, with the NCCL backend. We use apex opt_level 'O0' and a crop size of 713x713, with synchronized BN.
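
A minimal PyTorch sketch of the polynomial learning-rate schedule and the weighted auxiliary loss described above (hand-rolled for illustration; the actual training code lives in this repo):

    import torch
    import torch.nn as nn

    base_lr, power, max_iter = 1e-2, 0.9, 100_000

    def poly_lr(cur_iter: int) -> float:
        """Polynomial decay: base_lr * (1 - iter / max_iter) ** power."""
        return base_lr * (1.0 - cur_iter / max_iter) ** power

    # SGD with momentum 0.9 and weight decay 1e-4, as described above;
    # the parameter list here is a placeholder for model.parameters().
    params = [nn.Parameter(torch.zeros(1))]
    optimizer = torch.optim.SGD(params, lr=base_lr, momentum=0.9, weight_decay=1e-4)

    # 255 marks unlabeled pixels and is ignored by the CE loss.
    criterion = nn.CrossEntropyLoss(ignore_index=255)

    def total_loss(main_logits, aux_logits, target, aux_weight=0.4):
        """Main CE loss plus the 0.4-weighted auxiliary CE loss."""
        return criterion(main_logits, target) + aux_weight * criterion(aux_logits, target)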

Training Instructions

Download the HRNet backbone model here from the original authors' OneDrive. We use 8 Quadro RTX 6000 cards, each with 24 GB of memory, for training.

