
Zero Experience Required

This repository contains a PyTorch implementation of our CVPR 2022 paper:

Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation
Ziad Al-Halah, Santhosh K. Ramakrishnan, Kristen Grauman
The University of Texas at Austin, Facebook AI Research

Project website: https://vision.cs.utexas.edu/projects/zsel

Abstract

In reinforcement learning for visual navigation, it is common to develop a model for each new task, and train that model from scratch with task-specific interactions in 3D environments. However, this process is expensive; massive amounts of interactions are needed for the model to generalize well. Moreover, this process is repeated whenever there is a change in the task type or the goal modality. We present a unified approach to visual navigation using a novel modular transfer learning model. Our model can effectively leverage its experience from one source task and apply it to multiple target tasks (e.g., ObjectNav, RoomNav, ViewNav) with various goal modalities (e.g., image, sketch, audio, label). Furthermore, our model enables zero-shot experience learning, whereby it can solve the target tasks without receiving any task-specific interactive training. Our experiments on multiple photorealistic datasets and challenging tasks show that our approach learns faster, generalizes better, and outperforms SoTA models by a significant margin.

Installation

Clone the current repository and required submodules:

git clone git@github.com:ziadalh/zero_experience_required.git
cd zero_experience_required
git submodule init
git submodule update
export ZER_ROOT=$PWD

Create a conda environment:

conda create --name zer python=3.6.10
conda activate zer

Install PyTorch:

conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch

Install other requirements for this repository:

pip install -r requirements.txt

Install habitat-lab and habitat-sim:

cd $ZER_ROOT/dependencies/habitat-lab
pip install -r requirements.txt
python setup.py develop --all

cd $ZER_ROOT/dependencies/habitat-sim
pip install -r requirements.txt
python setup.py install --headless --with-cuda
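
You can quickly verify the installation by importing both packages (an optional sanity check, not part of the original setup steps):

python -c "import habitat, habitat_sim; print('habitat-lab and habitat-sim imported successfully')"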

Datasets

You can download the datasets used in this work from the following table:

| Task      | Dataset | Split | File                   | Install Path                                     |
|-----------|---------|-------|------------------------|--------------------------------------------------|
| ImageNav  | Gibson  | train | imagenav_gibson_train  | $ZER_ROOT/data/datasets/zer/imagenav/gibson/v1/  |
| ImageNav  | Gibson  | val   | imagenav_gibson_val    | $ZER_ROOT/data/datasets/zer/imagenav/gibson/v1/  |
| ImageNav  | HM3D    | val   | imagenav_hm3d_val      | $ZER_ROOT/data/datasets/zer/imagenav/hm3d/v1/    |
| ImageNav  | MP3D    | test  | imagenav_mp3d_test     | $ZER_ROOT/data/datasets/zer/imagenav/mp3d/v1/    |
| ObjectNav | Gibson  | train | objectnav_gibson_train | $ZER_ROOT/data/datasets/zer/objectnav/gibson/v1/ |
| ObjectNav | Gibson  | val   | objectnav_gibson_val   | $ZER_ROOT/data/datasets/zer/objectnav/gibson/v1/ |
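
As a convenience, the install paths from the table can be created up front; a minimal sketch (the archive names and formats are assumptions, use whatever the table's download links provide):

mkdir -p $ZER_ROOT/data/datasets/zer/imagenav/{gibson,hm3d,mp3d}/v1
mkdir -p $ZER_ROOT/data/datasets/zer/objectnav/gibson/v1
# extract each downloaded file into its install path, e.g. (file name assumed):
unzip imagenav_gibson_train.zip -d $ZER_ROOT/data/datasets/zer/imagenav/gibson/v1/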

Download the respective scenes from Gibson, HM3D, and Matterport3D. Save (or link) the scenes under $ZER_ROOT/data/scene_datasets/<DATASET_NAME> where <DATASET_NAME> is gibson, hm3d, or mp3d.
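
For example, if the scene datasets already exist elsewhere on disk, symlinking them avoids a copy; a minimal sketch (the source paths are assumptions):

mkdir -p $ZER_ROOT/data/scene_datasets
# link existing scene folders instead of copying them
ln -s /path/to/gibson $ZER_ROOT/data/scene_datasets/gibson
ln -s /path/to/hm3d $ZER_ROOT/data/scene_datasets/hm3d
ln -s /path/to/mp3d $ZER_ROOT/data/scene_datasets/mp3d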

Pretrained models

Download the pretrained model from the following table:

| Task     | Training Data | Model           |
|----------|---------------|-----------------|
| ImageNav | Gibson        | imagenav_gibson |

Evaluating ImageNav Models

The evaluation configurations for our ImageNav model are provided in config/imagenav/eval_ppo_imagenav_rgb.yaml.

Make sure the dataset paths are correct in DATASET.DATA_PATH of the respective dataset configuration file in config/imagenav/.
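
For reference, a Habitat dataset configuration typically points DATA_PATH at the episode files using a {split} placeholder; a hypothetical excerpt (the exact keys and paths in this repository's configs may differ):

DATASET:
  SPLIT: val_easy
  DATA_PATH: data/datasets/zer/imagenav/gibson/v1/{split}/{split}.json.gz
  SCENES_DIR: data/scene_datasets/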

To evaluate our ImageNav model on the Gibson <SPLIT_NAME> split (val_easy, val_medium, or val_hard), run the following command:

python -u run.py \
  --exp-config config/imagenav/eval_ppo_imagenav_rgb.yaml \
  --run-type eval \
  --output-dir <OUTPUT_DIR>  \
  EVAL.SPLIT <SPLIT_NAME> \
  EVAL_CKPT_PATH_DIR <PATH_TO_IMAGENAV_GIBSON_MODEL>
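
For instance, a concrete invocation for the val_easy split might look as follows (the output directory and checkpoint path are illustrative):

python -u run.py \
  --exp-config config/imagenav/eval_ppo_imagenav_rgb.yaml \
  --run-type eval \
  --output-dir results/imagenav_gibson \
  EVAL.SPLIT val_easy \
  EVAL_CKPT_PATH_DIR data/models/imagenav_gibson.pth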

For cross-evaluation on the HM3D <SPLIT_NAME> split (val_easy, val_medium, or val_hard):

python -u run.py \
  --exp-config config/imagenav/eval_ppo_imagenav_rgb.yaml \
  --run-type eval \
  --output-dir <OUTPUT_DIR>  \
  EVAL.SPLIT <SPLIT_NAME> \
  EVAL_CKPT_PATH_DIR <PATH_TO_IMAGENAV_GIBSON_MODEL> \
  BASE_TASK_CONFIG_PATH config/imagenav/hm3d/imagenav_rgb.yaml

For cross-evaluation on the MP3D <SPLIT_NAME> split (test_easy, test_medium, or test_hard):

python -u run.py \
  --exp-config config/imagenav/eval_ppo_imagenav_rgb.yaml \
  --run-type eval \
  --output-dir <OUTPUT_DIR>  \
  EVAL.SPLIT <SPLIT_NAME> \
  EVAL_CKPT_PATH_DIR <PATH_TO_IMAGENAV_GIBSON_MODEL> \
  BASE_TASK_CONFIG_PATH config/imagenav/mp3d/imagenav_rgb.yaml

Zero-Shot Experience Learning

We will soon release the code and data for the zero-shot experience learning (ZSEL) experiments in the paper.

Acknowledgements

In our work, we used and extended parts of Habitat Lab. Some of the ImageNav datasets are adapted from Hahn et al. and Mezghani et al. Please see our paper for details.

Citation

@inproceedings{al-halah2022zsel,
    author = {Ziad Al-Halah and Santhosh K. Ramakrishnan and Kristen Grauman},
    title = {{Zero Experience Required: Plug \& Play Modular Transfer Learning for Semantic Visual Navigation}},
    year = {2022},
    booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    arxivId = {2202.02440}
}

License

This project is released under the MIT license, as found in the LICENSE file.
