ai4ce / deepexplorer

[RSS2023] Metric-Free Exploration for Topological Mapping by Task and Motion Imitation in Feature Space

Home Page: https://ai4ce.github.io/DeepExplorer/

License: Apache License 2.0

Language: Python 100.00%

Topics: embodied-ai, habitat-sim, task-and-motion-planning, topological-map, visual-exploration, visual-navigation

deepexplorer's Introduction

Yuhang He*, Irving Fang*, Yiming Li, Rushi Bhavesh Shah, Chen Feng

Abstract

We propose DeepExplorer, a simple and lightweight metric-free exploration method for topological mapping of unknown environments. It performs task and motion planning (TAMP) entirely in image feature space. The task planner is a recurrent network using the latest image observation sequence to hallucinate a feature as the next-best exploration goal. The motion planner then utilizes the current and the hallucinated features to generate an action taking the agent towards that goal. Our novel feature hallucination enables imitation learning with deep supervision to jointly train the two planners more efficiently than baseline methods. During exploration, we iteratively call the two planners to predict the next action, and the topological map is built by constantly appending the latest image observation and action to the map and using visual place recognition (VPR) for loop closing. The resulting topological map efficiently represents an environment's connectivity and traversability, so it can be used for tasks such as visual navigation. We show DeepExplorer's exploration efficiency and strong sim2sim generalization capability on large-scale simulation datasets like Gibson and MP3D. Its effectiveness is further validated via the image-goal navigation performance on the resulting topological map. We further show its strong zero-shot sim2real generalization capability in real-world experiments.
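To make the loop above concrete, here is a minimal sketch of the exploration procedure. It assumes hypothetical callables (encoder, task_planner, motion_planner, vpr_match) and a simple node/edge map structure; it illustrates the idea and is not this repository's actual API.

def explore(env, encoder, task_planner, motion_planner, vpr_match, num_steps):
    # Hypothetical sketch of the exploration loop described above; all names
    # are illustrative placeholders, not the repository's actual API.
    obs = env.reset()
    features = [encoder(obs)]                  # latest image-feature sequence
    topo_map = {"nodes": [obs], "edges": []}   # observation nodes, action edges

    for _ in range(num_steps):
        # Task planner: hallucinate the feature of the next-best exploration
        # goal from the latest observation-feature sequence.
        goal_feature = task_planner(features)

        # Motion planner: map (current feature, goal feature) to an action
        # that takes the agent towards the hallucinated goal.
        action = motion_planner(features[-1], goal_feature)

        obs = env.step(action)
        features.append(encoder(obs))

        # Grow the map by appending the latest observation and action.
        topo_map["nodes"].append(obs)
        new_idx = len(topo_map["nodes"]) - 1
        topo_map["edges"].append((new_idx - 1, new_idx, action))

        # Loop closing: if VPR matches a previously visited node, add a
        # loop-closure edge (no action label here).
        match = vpr_match(obs, topo_map["nodes"][:-1])
        if match is not None:
            topo_map["edges"].append((match, new_idx, None))

    return topo_map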

Project Website

Please visit the project website for more information, such as the video presentation and visualizations.

Environment Setup

Please refer to requirements.txt for the environment we ran our experiments in.

Note that we ran all our experiments with habitat-sim and habitat-lab at version 0.1.6. The commit hash for habitat-sim 0.1.6 is 781f787; the hash for habitat-lab 0.1.6 is ac937fd. In principle, the habitat-sim and habitat-lab versions should match, but we never experimented with mismatched versions.

You could potentially run our code on a newer version of habitat, since the software should be fairly backward compatible. However, we noticed that some configuration files need to be manually changed to implement the 360° panorama camera we used, so we decided to stick with the old version of habitat.

Because of the old habitat version, our Python version is pinned to `3.6`.
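If you want to confirm that your environment matches ours, a quick sanity check is the snippet below. It assumes both packages expose `__version__`; adjust if your install differs.

# Quick sanity check for the pinned environment; assumes both packages
# expose __version__ (adjust if your install differs).
import habitat
import habitat_sim

print(habitat.__version__)      # expected: 0.1.6
print(habitat_sim.__version__)  # expected: 0.1.6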

I wrote this tutorial on installing habitat for our lab's junior members. It focuses on installing it in an HPC environment, but it may still be helpful for you.

TL;DR

  • train contains all the code for training

  • exploration contains all the code for exploration inference

  • navigation contains all the code for navigation inference

Data Preparation

To generate the expert demonstration data we used for training, please refer to the experdemon_datacollect folder.

Exploration Training

The DeepExplorer model is defined in models/motion_task_joint_planner.py. The script to train it can be found in train/train_explorer.py.

The dataset that we wrote to handle the expert demo data can be found in train/DataProvider_explorer.py.

The pre-trained model can be found in the pretrained_model folder.
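As a rough sketch of how the joint imitation objective with deep supervision could look (hypothetical names and tensor shapes; the actual training logic lives in train/train_explorer.py):

# Hypothetical sketch of joint task/motion imitation with deep supervision.
# Assumes batches of (feature sequence, expert next feature, expert action);
# the real objective is implemented in train/train_explorer.py.
import torch.nn.functional as F

def joint_imitation_loss(task_planner, motion_planner,
                         feat_seq, expert_next_feat, expert_action):
    # Deep supervision on the task planner: the hallucinated goal feature
    # is regressed directly against the expert's next observed feature.
    goal_feat = task_planner(feat_seq)                   # (B, D)
    task_loss = F.mse_loss(goal_feat, expert_next_feat)

    # The motion planner imitates the expert action given the current
    # feature and the hallucinated goal feature.
    logits = motion_planner(feat_seq[:, -1], goal_feat)  # (B, num_actions)
    motion_loss = F.cross_entropy(logits, expert_action)

    return task_loss + motion_loss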

Exploration Inference

In the exploration folder, you can find the scripts to run exploration on Gibson and MP3D: explore_[dataset].py. To calculate the coverage ratio, the script coverage_ratio_area_[dataset].py can be used.

We adopted two separate scripts for the two datasets to accommodate differences such as the number of floors.
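For reference, the coverage ratio is simply the fraction of navigable area the agent has seen. A minimal version over a top-down occupancy grid (hypothetical inputs; the actual per-dataset computation is in coverage_ratio_area_[dataset].py) looks like:

# Minimal sketch of a coverage-ratio computation over a top-down grid;
# the actual per-dataset logic is in coverage_ratio_area_[dataset].py.
import numpy as np

def coverage_ratio(seen_mask: np.ndarray, navigable_mask: np.ndarray) -> float:
    """Fraction of navigable grid cells the agent has observed."""
    seen_navigable = np.logical_and(seen_mask, navigable_mask)
    return float(seen_navigable.sum()) / max(int(navigable_mask.sum()), 1)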

Navigation

Action Assigner

In train/train_actassigner.py, you can train the ActionAssigner that we used in the map completion stage, where we enhance our topological map with visual similarity.

The dataset for this can be found in train/DataProvider_actassigner.py, and the model is defined in models/action_assigner.py.

The pre-trained model can be found in the pretrained_model folder.
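Conceptually, the ActionAssigner turns a visual-similarity (VPR) match between two map nodes into a traversable edge by predicting the connecting action. A hypothetical sketch (the real model is defined in models/action_assigner.py):

# Hypothetical sketch of map completion with an ActionAssigner-style model:
# each VPR-matched node pair gets an edge labeled with a predicted action.
# The real model is defined in models/action_assigner.py.
def add_similarity_edges(topo_map, vpr_pairs, action_assigner):
    for i, j in vpr_pairs:
        action = action_assigner(topo_map["nodes"][i], topo_map["nodes"][j])
        topo_map["edges"].append((i, j, action))
    return topo_map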

Navigation Pipeline

For the navigation pipeline, including running VPR to enhance the topological map and actually performing navigation, please refer to the navigation folder.
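At a high level, image-goal navigation on the finished map reduces to graph search: localize the current and goal images onto map nodes with VPR, find a path, and replay the stored actions. A sketch under those assumptions (hypothetical names; see the navigation folder for the actual pipeline):

# Hypothetical sketch of image-goal navigation over the topological map:
# localize start/goal via VPR, plan a shortest path, replay edge actions.
# Names are illustrative; the actual pipeline is in the navigation folder.
import networkx as nx

def navigate(env, topo_map, current_obs, goal_image, vpr_match):
    # Build a directed graph whose edges carry the stored actions;
    # unlabeled loop-closure edges are skipped.
    graph = nx.DiGraph()
    for i, j, action in topo_map["edges"]:
        if action is not None:
            graph.add_edge(i, j, action=action)

    # Localize the current and goal observations onto map nodes via VPR.
    start = vpr_match(current_obs, topo_map["nodes"])
    goal = vpr_match(goal_image, topo_map["nodes"])

    # Follow the shortest node path, replaying each edge's action.
    path = nx.shortest_path(graph, source=start, target=goal)
    for u, v in zip(path, path[1:]):
        env.step(graph.edges[u, v]["action"])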

Pretrained Models

All the pretrained models can be found in the pretrained_model folder.

Citations

If you find our work helpful, please use the following to cite it:

@INPROCEEDINGS{He-RSS-23, 
  AUTHOR    = {Yuhang He AND Irving Fang AND Yiming Li AND Rushi Bhavesh Shah 
  AND Chen Feng}, 
  TITLE     = {{Metric-Free Exploration for Topological Mapping by Task and 
  Motion Imitation in Feature Space}}, 
  BOOKTITLE = {Proceedings of Robotics: Science and Systems}, 
  YEAR      = {2023}, 
  ADDRESS   = {Daegu, Republic of Korea}, 
  MONTH     = {July}, 
  DOI       = {10.15607/RSS.2023.XIX.099} 
} 

deepexplorer's People

Contributors

irvingf7, roboticsyimingli, yuhanghe01


deepexplorer's Issues

How to plot All Anchor Points

Hi,
I want to ask how you plotted all the anchor points in an environment, as shown in Fig. 4A of your paper.
Best wishes,
Hugh

Expert Demonstration Data Collection

Hi @yuhanghe01,
I tried to collect data using your collect_expert_demon_data.py. However, all the .png images are black (i.e., all values are 0).
I would appreciate it if you could help solve this issue.
Best wishes,
Hugh

Problems of running collect_expert_demon_data.py

Hi,
I followed your tutorial on installing habitat for your lab's junior members.
When I try to run your collect_expert_demon_data.py, it shows the following:

Traceback (most recent call last):
  File "collect_expert_demon_data.py", line 12, in <module>
    from habitat_baselines.utils.common import batch_obs
  File "/home/hugh/Data_Collection/habitat-lab/habitat_baselines/__init__.py", line 7, in <module>
    from habitat_baselines.common.base_il_trainer import BaseILTrainer
  File "/home/hugh/Data_Collection/habitat-lab/habitat_baselines/common/base_il_trainer.py", line 10, in <module>
    from habitat_baselines.common.base_trainer import BaseTrainer
  File "/home/hugh/Data_Collection/habitat-lab/habitat_baselines/common/base_trainer.py", line 14, in <module>
    from habitat_baselines.common.tensorboard_utils import TensorboardWriter
  File "/home/hugh/Data_Collection/habitat-lab/habitat_baselines/common/tensorboard_utils.py", line 11, in <module>
    from torch.utils.tensorboard import SummaryWriter
  File "/home/hugh/anaconda3/envs/habitat/lib/python3.6/site-packages/torch-1.10.2-py3.6-linux-x86_64.egg/torch/utils/tensorboard/__init__.py", line 13, in <module>
    from .writer import FileWriter, SummaryWriter  # noqa: F401
  File "/home/hugh/anaconda3/envs/habitat/lib/python3.6/site-packages/torch-1.10.2-py3.6-linux-x86_64.egg/torch/utils/tensorboard/writer.py", line 13, in <module>
    from tensorboard.summary.writer.event_file_writer import EventFileWriter
  File "/home/hugh/anaconda3/envs/habitat/lib/python3.6/site-packages/tb_nightly-2.17.0a20240303-py3.6.egg/tensorboard/summary/__init__.py", line 22, in <module>
    from tensorboard.summary import v1  # noqa: F401
  File "/home/hugh/anaconda3/envs/habitat/lib/python3.6/site-packages/tb_nightly-2.17.0a20240303-py3.6.egg/tensorboard/summary/v1.py", line 23, in <module>
    from tensorboard.plugins.histogram import summary as _histogram_summary
  File "/home/hugh/anaconda3/envs/habitat/lib/python3.6/site-packages/tb_nightly-2.17.0a20240303-py3.6.egg/tensorboard/plugins/histogram/summary.py", line 35, in <module>
    from tensorboard.plugins.histogram import summary_v2
  File "/home/hugh/anaconda3/envs/habitat/lib/python3.6/site-packages/tb_nightly-2.17.0a20240303-py3.6.egg/tensorboard/plugins/histogram/summary_v2.py", line 35, in <module>
    from tensorboard.util import tensor_util
  File "/home/hugh/anaconda3/envs/habitat/lib/python3.6/site-packages/tb_nightly-2.17.0a20240303-py3.6.egg/tensorboard/util/tensor_util.py", line 20, in <module>
    from tensorboard.compat.tensorflow_stub import dtypes, compat, tensor_shape
  File "/home/hugh/anaconda3/envs/habitat/lib/python3.6/site-packages/tb_nightly-2.17.0a20240303-py3.6.egg/tensorboard/compat/tensorflow_stub/__init__.py", line 25, in <module>
    from . import app  # noqa
  File "/home/hugh/anaconda3/envs/habitat/lib/python3.6/site-packages/tb_nightly-2.17.0a20240303-py3.6.egg/tensorboard/compat/tensorflow_stub/app.py", line 21, in <module>
    from . import flags
  File "/home/hugh/anaconda3/envs/habitat/lib/python3.6/site-packages/tb_nightly-2.17.0a20240303-py3.6.egg/tensorboard/compat/tensorflow_stub/flags.py", line 25, in <module>
    from absl.flags import *  # pylint: disable=wildcard-import
  File "/home/hugh/anaconda3/envs/habitat/lib/python3.6/site-packages/absl_py-2.1.0-py3.6.egg/absl/flags/__init__.py", line 35, in <module>
    from absl.flags import _argument_parser
  File "/home/hugh/anaconda3/envs/habitat/lib/python3.6/site-packages/absl_py-2.1.0-py3.6.egg/absl/flags/_argument_parser.py", line 82, in <module>
    class ArgumentParser(Generic[_T], metaclass=_ArgumentParserCache):
TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases

I am not sure whether I installed it correctly or what specific problem I encountered.
Best regards,
Hugh
