notmahi / dobb-e
560.0 15.0 51.0 53.97 MB

Dobb·E: An open-source, general framework for learning household robotic manipulation

Home Page: https://dobb-e.com

License: MIT License

Python 1.11% Shell 0.01% G-code 98.89%
behavior-cloning imitation-learning robot-learning robotics

dobb-e's Introduction


Dobb·E


Project webpage · Documentation (gitbooks) · Paper

Authors: Mahi Shafiullah*, Anant Rai*, Haritheja Etukuru, Yiqian Liu, Ishan Misra, Soumith Chintala, Lerrel Pinto

Open-source repository of the hardware and software components of Dobb·E, as described in the associated paper, On Bringing Robots Home.

dobb-e.mp4

Abstract

Throughout history, we have successfully integrated various machines into our homes - dishwashers, laundry machines, stand mixers, and robot vacuums are a few of the latest examples. However, these machines excel at performing a single task effectively. The concept of a “generalist machine” in homes - a domestic assistant that can adapt and learn from our needs, all while remaining cost-effective - has long been a north star in robotics, steadily pursued for decades. In this work, we initiate a large-scale effort towards this goal by introducing Dobb·E, an affordable yet versatile general-purpose system for learning robotic manipulation within household settings. Dobb·E can learn a new task with only five minutes of a user showing it how, thanks to a demonstration collection tool (“The Stick”) we built out of cheap parts and iPhones. We use the Stick to collect 13 hours of data in 22 homes of New York City, and train Home Pretrained Representations (HPR). Then, in a novel home environment, with five minutes of demonstrations and fifteen minutes of adapting the HPR model, we show that Dobb·E can reliably solve the task on the Stretch, a mobile robot readily available on the market. Across roughly 30 days of experimentation in homes of New York City and surrounding areas, we test our system in 10 homes, with a total of 109 tasks in different environments, and finally achieve a success rate of 81%. Beyond success percentages, our experiments reveal a plethora of unique challenges absent or ignored in lab robotics, ranging from the effects of strong shadows to demonstration quality by non-expert users. With the hope of accelerating research on home robots, and eventually seeing robot butlers in every home, we open-source the Dobb·E software stack and models, our data, and our hardware designs.

What's on this repo

Dobb·E is made out of four major components:

  1. A hardware tool, called The Stick, to comfortably collect robotic demonstrations in homes.
  2. A dataset, called Homes of New York (HoNY), with 1.5 million RGB-D frames collected with the Stick across 22 homes and 216 environments in New York City.
  3. A pretrained lightweight foundational vision model called Home Pretrained Representations (HPR), trained on the HoNY dataset.
  4. Finally, the platform to tie it all together to deploy it in novel homes, where with only five minutes of training data and 15 minutes of fine-tuning HPR, Dobb·E can solve many simple household tasks.

Reflecting this structure, there are four folders in this repo, where:

  1. hardware contains our 3D printable STL files, as well as instructions on how to set up the Stick.
  2. stick-data-collection contains all the necessary software for processing any data you collect on the Stick.
  3. imitation-in-homes contains our code for training a policy on your collected data using our pretrained models, and also the code to pretrain a new model yourself.
  4. robot-server contains the code to be run on the robot to deploy the learned policies.

The primary documentation source is the GitBook at https://docs.dobb-e.com. Each folder's README contains additional documentation.

Paper

Get it from arXiv or our website.

Citation

If you find any of our work useful, please cite us!

@misc{shafiullah2023dobbe,
      title={On Bringing Robots Home}, 
      author={Nur Muhammad Mahi Shafiullah and Anant Rai and Haritheja Etukuru and Yiqian Liu and Ishan Misra and Soumith Chintala and Lerrel Pinto},
      year={2023},
      eprint={2311.16098},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}

dobb-e's People

Contributors

haritheja-e · notmahi


dobb-e's Issues

'NoneType' object is not iterable during finetune training

Hi team,
Thank you so much for open sourcing such an amazing project! I thought this would be my perfect opportunity to learn about behavioral cloning of robots.
However, during training I encountered an error whose cause I could not determine. I tried changing the order of imports as mentioned in issue #4. That worked, but a new error appeared.

My error message is as follows; could you please take a look:


$ python train.py --config-name=finetune
/home/me/dobbe/dobb-e/imitation-in-homes
checkpoints/2024-03-20/Drawer_Opening-Env1-moco-18-46-33
[2024-03-20 18:46:33,636][main][INFO] - Working in /home/me/dobbe/dobb-e/imitation-in-homes
[2024-03-20 18:46:34,283][main][INFO] - Setting up dataloaders.
Filtering trajectories: 100%|███████████████| 81/81 [00:00<00:00, 159426.85it/s]
/home/me/dobbe/dobb-e/imitation-in-homes/dataloaders/utils.py:283: UserWarning: No trajectories were found matching the specified criteria. Make sure you have the correct dataset root and trajectory roots, and the correct include/exclude criteria.
warnings.warn(
Filtering trajectories: 100%|███████████████| 81/81 [00:00<00:00, 172106.70it/s]
[2024-03-20 18:46:35,192][timm.models._builder][INFO] - Loading pretrained weights from Hugging Face hub (notmahi/dobb-e)
[2024-03-20 18:46:35,481][timm.models._hub][INFO] - [notmahi/dobb-e] Safe alternative available for 'pytorch_model.bin' (as 'model.safetensors'). Loading weights using safetensors.
Using static action normalization
act_mean: tensor([ 7.2950e-05, 5.2102e-03, 9.5942e-04, -1.9970e-04, -1.5271e-03,
3.3574e-04, 0.0000e+00])
act_std: tensor([0.0144, 0.0221, 0.0110, 0.0158, 0.0465, 0.0253, 1.0000])
0it [00:00, ?it/s] | 0/50 [00:00<?, ?it/s]
Epoch 1: 0%| | 0/50 [00:00<?, ?it/s]
Error executing job with overrides: []
Traceback (most recent call last):
File "/home/me/dobbe/dobb-e/imitation-in-homes/train.py", line 414, in main
workspace.run()
File "/home/me/dobbe/dobb-e/imitation-in-homes/train.py", line 194, in run
self._train_epoch()
File "/home/me/dobbe/dobb-e/imitation-in-homes/train.py", line 223, in _train_epoch
self._train_step(batch, overall_loss_dict, iterator)
File "/home/me/dobbe/dobb-e/imitation-in-homes/train.py", line 240, in _train_step
batch_in_device = self.to_device(batch)
File "/home/me/dobbe/dobb-e/imitation-in-homes/train.py", line 204, in to_device
manyTensors = [
TypeError: 'NoneType' object is not iterable
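For what it's worth, the warning earlier in the log ("No trajectories were found matching the specified criteria") suggests the dataloader matched nothing and handed back None, which then fails when train.py iterates over the batch. A minimal sketch of that failure mode (the to_device body here is a hypothetical simplification, not the repo's actual code):

```python
# Hypothetical simplification of train.py's to_device pattern.
def to_device(batch):
    # If the dataloader matched no trajectories, batch can be None,
    # and iterating over None raises exactly the TypeError in the traceback.
    return [item for item in batch]

try:
    to_device(None)  # what an empty/misconfigured dataloader can hand back
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable
```

If that is what is happening, the fix is likely in the dataset paths in env_vars.yaml rather than in the training code.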


Also, my env_vars.yaml settings are as follows; are there any mistakes? (I did not modify the content in finetune.yaml.)


home_ssl_data_root: /home/me/dobbe/dobb-e/imitation-in-homes/iphone_data # root to home ssl data folder
finetune_task_data_root: /home/me/dobbe/dobb-e/imitation-in-homes/finetune_directory # root to the exported finetuning data folder

home_ssl_data_original_root: /home/me/dobbe/dobb-e/imitation-in-homes/iphone_data # path included in the r3d_files.txt file of dataset
finetune_task_data_original_root: /home/me/dobbe/dobb-e/imitation-in-homes/finetune_directory # path included in r3d_files.txt of dataset (may be the same as finetune_task_data_root if data has not been moved nor transferred across machines)
project_root: /home/me/dobbe/dobb-e/imitation-in-homes # path to this repo

wandb:
  entity: null # wandb username


Below is my file path: [image]

I would be very grateful if you could help me.

Ask for camera information

Hi, thanks for your great work!

May I ask where are the camera intrinsics and extrinsics? I failed to find them in the dataset.

Thanks!

Segmentation fault while running train.py

Hi, I ran train.py following the imitation-in-homes README.

I downloaded all the datasets and changed the paths in env_vars.yaml; other than that, all configurations are default. But when I run the command
python train.py --config-name=finetune

The logs show:

[2024-02-06 15:48:42,360][main][INFO] - Working in ./dobb-e/imitation-in-homes
[2024-02-06 15:48:43,065][main][INFO] - Setting up dataloaders.
Filtering trajectories: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:00<00:00, 132814.16it/s]
./dobb-e/imitation-in-homes/dataloaders/utils.py:283: UserWarning: No trajectories were found matching the specified criteria. Make sure you have the correct dataset root and trajectory roots, and the correct include/exclude criteria.
warnings.warn(
Filtering trajectories: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:00<00:00, 155700.56it/s]
[2024-02-06 15:48:43,941][timm.models._builder][INFO] - Loading pretrained weights from Hugging Face hub (notmahi/dobb-e)
[2024-02-06 15:48:44,306][timm.models._hub][INFO] - [notmahi/dobb-e] Safe alternative available for 'pytorch_model.bin' (as 'model.safetensors'). Loading weights using safetensors.
Using static action normalization
act_mean: tensor([ 7.2950e-05, 5.2102e-03, 9.5942e-04, -1.9970e-04, -1.5271e-03,
3.3574e-04, 0.0000e+00])
act_std: tensor([0.0144, 0.0221, 0.0110, 0.0158, 0.0465, 0.0253, 1.0000])
Fatal Python error: Segmentation fault

I used faulthandler and found that the error occurred in
self.model = self.model.to(self.device)
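For others hitting similar native crashes: faulthandler can be enabled at the top of the entry script (a general Python debugging technique, not something specific to this repo) so the interpreter prints the Python stack at the point of the segfault:

```python
import faulthandler

# Enable as early as possible in the entry script; on SIGSEGV (and similar
# fatal signals) the interpreter dumps each thread's Python traceback to
# stderr, pinpointing the Python-level call where the crash happened.
faulthandler.enable()

# Sanity check that the handler is installed.
print(faulthandler.is_enabled())  # True
```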

camera intrinsics

Thanks for your awesome work! I am currently transforming the depth images into a 3D representation, but I have not been able to locate the camera intrinsics in your dataset, HoNY. Could you provide them?
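For reference, once intrinsics (fx, fy, cx, cy) are available, the standard pinhole back-projection turns a depth map into a point cloud. This is a generic sketch with placeholder intrinsics, not values from HoNY:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an HxW metric depth map into an (H*W)x3 point cloud."""
    h, w = depth.shape
    # Pixel grid: u runs along columns (x), v along rows (y).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy  # Y = (v - cy) * Z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Placeholder intrinsics for illustration; real values must come from the
# capture device (e.g. the iPhone's ARKit camera parameters).
pts = depth_to_points(np.ones((4, 4), dtype=np.float32),
                      fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```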
