Learning from Synthetic Animals

Code for Learning from Synthetic Animals (CVPR 2020, oral). The code is developed on the PyTorch framework (1.1.0) with Python 3.7.3. This repo includes training code for consistency-constrained semi-supervised learning (CC-SSL) and a synthetic animal dataset.

Citation

If you find our code or method helpful, please cite it using the following BibTeX entry.

@InProceedings{Mu_2020_CVPR,
author = {Mu, Jiteng and Qiu, Weichao and Hager, Gregory D. and Yuille, Alan L.},
title = {Learning From Synthetic Animals},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

Requirements

Installation

  1. Clone the repository with its submodules:

    git clone --recurse-submodules https://github.com/JitengMu/Learning-from-Synthetic-Animals
    
  2. Go to the directory Learning-from-Synthetic-Animals/ and create a symbolic link to the images directory of the animal dataset:

    ln -s PATH_TO_IMAGES_DIR ./animal_data
    
  3. Download and pre-process the datasets:

    • Download the TigDog dataset and move the folder behaviorDiscovery2.0 to ./animal_data.
    • Run python get_cropped_TigDog.py to produce the cropped TigDog dataset.
    • Download the synthetic animal dataset with the script bash get_dataset.sh.
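
After these steps, the data root should contain the TigDog folder. A quick sanity check in Python (only behaviorDiscovery2.0 is named above; the folder names produced by the other two steps are not specified here, so they are listed rather than asserted):

    # Hedged sanity check for the data layout described above.
    # Only 'behaviorDiscovery2.0' is named in the instructions; any other
    # expected folders are assumptions and are only printed, not asserted.
    from pathlib import Path

    root = Path('./animal_data')
    assert root.is_dir(), 'missing ./animal_data symlink (step 2)'
    assert (root / 'behaviorDiscovery2.0').is_dir(), 'missing TigDog folder (step 3)'
    print('animal_data contents:', sorted(p.name for p in root.iterdir()))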

High-level organization

  • ./data for train/val splits and dataset statistics (mean/std).

  • ./train for training scripts.

  • ./evaluation for inference scripts.

  • ./CC-SSL for consistency-constrained semi-supervised learning.

  • ./pose for the stacked hourglass model (a construction sketch follows this list).

  • ./data_generation for synthetic animal dataset generation.
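
For orientation, here is a minimal sketch of building the 4-stack hourglass network that the training commands below configure with --stacks 4 --blocks 1. It assumes ./pose exposes a pytorch-pose style models.hg factory with these keyword arguments; that convention is an assumption, so verify against the actual module:

    # Hedged sketch: constructing the 4-stack hourglass network.
    # `models.hg` and its kwargs follow the common pytorch-pose convention;
    # they are assumptions, not verified against this repo.
    import torch
    import pose.models as models

    model = models.hg(num_stacks=4, num_blocks=1, num_classes=18)  # 18 keypoint heatmaps
    with torch.no_grad():
        heatmaps = model(torch.randn(1, 3, 256, 256))  # list: one tensor per stack
    print(len(heatmaps), heatmaps[-1].shape)           # e.g. 4 and (1, 18, 64, 64)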

Demo

  1. Download the checkpoints with the script bash get_checkpoint.sh. The resulting structure looks like:

    checkpoint
    ├── real_animal
    │   ├── horse
    │   └── tiger
    └── synthetic_animal
        ├── horse
        ├── tiger
        └── others
  2. Run demo.ipynb to visualize predictions.

  3. Evaluate the accuracy on the TigDog dataset. (The 18 per-joint accuracies are reported in the order: left-eye, right-eye, chin, left-front-hoof, right-front-hoof, left-back-hoof, right-back-hoof, left-front-knee, right-front-knee, left-back-knee, right-back-knee, left-shoulder, right-shoulder, left-front-elbow, right-front-elbow, left-back-elbow, right-back-elbow.)

CUDA_VISIBLE_DEVICES=0 python ./evaluation/test.py --dataset1 synthetic_animal_sp --dataset2 real_animal_sp --arch hg --resume ./checkpoint/synthetic_animal/horse/horse_ccssl/synthetic_animal_sp.pth.tar --evaluate --animal horse
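
For reference, the reported per-joint accuracies are PCK@0.05 scores: a predicted keypoint counts as correct if it lies within 0.05 × max(bounding-box height, width) of the ground truth. A minimal sketch of the metric (the bounding-box normalization is the common convention for animal pose and an assumption here):

    # Hedged sketch of the PCK@0.05 metric; bbox normalization is assumed.
    import numpy as np

    def pck(pred, gt, visible, bbox_hw, alpha=0.05):
        """pred, gt: (K, 2) pixel coords; visible: (K,) bool; bbox_hw: (h, w)."""
        thr = alpha * max(bbox_hw)                    # distance threshold in pixels
        dist = np.linalg.norm(pred - gt, axis=1)      # per-keypoint error
        correct = (dist <= thr) & visible
        return correct.sum() / max(visible.sum(), 1)  # fraction of visible joints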

Train

  1. Train on the synthetic animal dataset.
CUDA_VISIBLE_DEVICES=0 python train/train.py --dataset synthetic_animal_sp -a hg --stacks 4 --blocks 1 --image-path ./animal_data/ --checkpoint ./checkpoint/horse/syn --animal horse
  2. Generate pseudo-labels for the TigDog dataset and jointly train on the synthetic animal and TigDog datasets (a schematic sketch of the pseudo-label selection rule follows this list).
CUDA_VISIBLE_DEVICES=0 python CCSSL/CCSSL.py --num-epochs 60 --checkpoint ./checkpoint/horse/ssl/ --resume ./checkpoint/horse/syn/model_best.pth.tar --animal horse
  3. Evaluate the accuracy on the TigDog dataset using the PCK@0.05 metric.
CUDA_VISIBLE_DEVICES=0 python ./evaluation/test.py --dataset1 synthetic_animal_sp --dataset2 real_animal_sp --arch hg --resume ./checkpoint/horse/ssl/synthetic_animal_sp.pth.tar --evaluate --animal horse
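
Conceptually, step 2 turns confident, transformation-consistent predictions on unlabeled real images into pseudo-labels. Below is a schematic sketch of that selection rule, not the repo's actual implementation; augment, invert, and both thresholds are illustrative placeholders:

    # Hedged sketch of consistency-constrained pseudo-label selection.
    # `augment` applies a random geometric transform, `invert` maps predicted
    # keypoints back to the original frame; both are illustrative placeholders.
    import torch

    @torch.no_grad()
    def select_pseudo_labels(model, image, augment, invert, conf_thr=0.5, tol=4.0):
        def predict(img):
            hm = model(img)[-1]                       # last-stack heatmaps (1, K, H, W)
            conf, idx = hm.flatten(2).max(dim=2)      # per-keypoint peak confidence
            w = hm.shape[3]
            xy = torch.stack((idx % w, idx // w), dim=2).float()
            return xy, conf

        xy, conf = predict(image)
        aug_img, params = augment(image)              # e.g. random scale/rotation
        back_xy = invert(predict(aug_img)[0], params) # map back to original frame
        stable = (xy - back_xy).norm(dim=2) < tol     # transformation consistency
        keep = (conf > conf_thr) & stable             # keypoints to pseudo-label
        return xy, keep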

Please refer to TRAINING.md for detailed training recipes!

Generate synthetic animal dataset using Unreal Engine

  1. Download and install the unrealcv_binary for Linux (tested on Ubuntu 16.04) with bash get_unrealcv_binary.sh.

  2. Run the Unreal Engine binary (see the UnrealCV client sketch after these steps).

DISPLAY= ./data_generation/unrealcv_binary/LinuxNoEditor/AnimalParsing/Binaries/Linux/AnimalParsing -cvport 9900
  3. Run the following script to generate images and ground truth (images, depths, keypoints).
python data_generation/unrealdb/example/animal_example/animal_data.py --animal horse --random-texture-path ./data_generation/val2017/ --use-random-texture --num-imgs 10

Generated data is saved in ./data_generation/generated_data/ by default.

  4. Run the following script to generate part segmentations (horse and tiger are supported).
python data_generation/generate_partseg.py --animal horse --dataset-path ./data_generation/generated_data/
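
Under the hood, the generation scripts above talk to the running binary through the UnrealCV Python client. A minimal connectivity sketch (the port matches -cvport 9900 above; only standard UnrealCV commands are used, and whether this custom AnimalParsing binary supports every capture mode is an assumption):

    # Hedged sketch: talking to the running AnimalParsing binary via UnrealCV.
    # Uses only standard unrealcv client calls; support for each capture mode
    # by this particular binary is an assumption.
    from unrealcv import Client

    client = Client(('localhost', 9900))      # matches -cvport 9900 above
    client.connect()
    if client.isconnected():
        print(client.request('vget /unrealcv/status'))
        client.request('vget /camera/0/lit lit.png')           # RGB render
        client.request('vget /camera/0/object_mask mask.png')  # instance mask
    client.disconnect()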

Acknowledgement
