Author: Zegang Cheng
REMINDER: The project is in an alpha stage and still under heavy development, so performance is not guaranteed.
This is a Differentiable Distributed Visual-SLAM (Simultaneous Localization and Mapping) system built on PyTorch. It is NOT aimed at embedded systems; on the contrary, it is designed to fully utilize the power of Big Data and distributed computing systems like NYU HPC Greene.
Currently, only equirectangular images are supported.
To fully utilize the power of multiple GPUs, the project is designed under the philosophy of the so-called "Actor Model", where each computing actor has access only to its own data and exchanges messages (through WebSockets) with the others. Currently, this logic is implemented in a naive, proof-of-concept way; it will be upgraded with industrial-grade infrastructure (e.g. Ray) in the future.
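To illustrate the idea, here is a minimal single-machine sketch of the actor pattern described above: each actor owns private state and a mailbox, and can only influence another actor by sending it a message. The class and actor names are hypothetical, not the project's actual API, and threads plus in-process queues stand in for the WebSocket transport.

```python
import queue
import threading

class Actor:
    """Minimal actor sketch (hypothetical names, not torchslam's classes)."""
    def __init__(self, name, router):
        self.name = name
        self.state = {}               # private: no other actor touches this
        self.mailbox = queue.Queue()  # the only way in from the outside
        self.router = router
        router[name] = self

    def send(self, dest, msg):
        # In the real system this would be a WebSocket frame to another process.
        self.router[dest].mailbox.put((self.name, msg))

    def run(self, n_messages):
        # Process a fixed number of messages, mutating only local state.
        for _ in range(n_messages):
            sender, msg = self.mailbox.get()
            self.state[sender] = msg

router = {}
tracker = Actor("tracker", router)
mapper = Actor("mapper", router)

t = threading.Thread(target=mapper.run, args=(1,))
t.start()
tracker.send("mapper", {"keyframe": 42})  # message passing, no shared state
t.join()
```

The point of the pattern is that moving from threads to processes (or machines) only requires swapping the transport behind `send`; the actors themselves are unchanged.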
- Multi-Process Actor Model (Proof-of-Concept)
- Graph Database (Proof-of-Concept)
- Nuxt.js & Vue.js & THREE.js Visualization (Proof-of-Concept)
- Naive Bundle Adjustment with Gradient-based Optimization
- Pose Graph Optimization
- Loop Closure Detection
- Semantic SLAM, Global/Local Map Optimization, Deep Learning-based Methods, etc.
- Parallel and distributed implementations of all the above
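As a sketch of what "naive bundle adjustment with gradient-based optimization" means, here is a toy 1-D analogue: camera positions and landmark positions are jointly refined by gradient descent on squared observation residuals. The data and variable names are hypothetical; in torchslam the same idea would run through PyTorch autograd on full camera poses rather than hand-written 1-D gradients.

```python
# Toy 1-D "bundle adjustment": jointly refine camera positions cams[j]
# and landmark positions lms[i] by gradient descent on squared residuals.
cams_true = [0.0, 1.0, 2.5]
lms_true = [4.0, 6.0]
# Noise-free observations z[i][j] = landmark_i - camera_j.
z = [[l - c for c in cams_true] for l in lms_true]

# Perturbed initial estimates; camera 0 stays fixed to remove gauge freedom.
cams = [0.0, 1.3, 2.1]
lms = [4.5, 5.4]

lr = 0.05
for _ in range(500):
    # residual r[i][j] = (l_i - c_j) - z_ij
    r = [[lms[i] - cams[j] - z[i][j] for j in range(3)] for i in range(2)]
    # analytic gradients of the summed squared residuals
    g_l = [2 * sum(r[i]) for i in range(2)]
    g_c = [-2 * sum(r[i][j] for i in range(2)) for j in range(3)]
    for i in range(2):
        lms[i] -= lr * g_l[i]
    for j in range(1, 3):          # keep camera 0 anchored
        cams[j] -= lr * g_c[j]
```

With noise-free observations and the gauge fixed, the estimates converge back to the true positions; the real problem differs only in using reprojection residuals over SE(3) poses and letting autograd supply the gradients.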
torchslam is distributed under the terms of the MIT license.