Inclusion@RSS 2021 Fellow | Robotics Science & Systems
I received the Inclusion@RSS Fellowship [ 44 Fellows list ] | RSS on YouTube
Robotic Mars Exploration: Recent Results and Future Prospects by Larry Matthies [NASA-JPL] [ Read , Keynote ]
I am interested in and taking notes on the following workshops at RSS 2021:
- Integrating Planning and Learning [website]
- Advancing Artificial Intelligence and Manipulation for Robotics: Understanding Gaps, Industry and Academic Perspectives, and Community Building [website]
- Representation Learning for Interaction Tasks NOTES by Danica Kragic (KTH)
- Supervised Local Autonomy for Mobile Manipulation with Spot NOTES by Al Rizzi (Boston Dynamics)
- Online Recovery from Failure NOTES by Jeannette Bohg (Stanford)
- Declarative and Neurosymbolic Representations in Robot Learning and Control [website]
- Acting, Learning, and Knowing in Large-Scale Space NOTES by Prof. Benjamin Kuipers
- Human-like planning for reaching in cluttered environments NOTES by Prof. Anthony G Cohn
- Mission Planning with Uncertain Models NOTES by Prof. Nick Hawes
- Signal to Symbol (via Skills) NOTES by Prof. George Konidaris
- Rich Representations for Rational Robots NOTES by Prof. Leslie P. Kaelbling
- Geometry and Topology in Robotics: Learning, Optimization, Planning, and Control [website]
- The geometry of nonlinear oscillation modes NOTES by Prof. Alin Albu-Schäffer.
- Riemannian manifolds learned from data NOTES by Dr. Georgios Arvanitidis.
- Bridging Topology and Geometry in Deformable Object Manipulation NOTES by Prof. Jeannette Bohg
- Gaussian Processes on Riemannian Manifolds for Robotics NOTES by Alexander Terenin & Viacheslav Borovitskiy
- Geometric Deep Learning: The Erlangen Programme of ML NOTES ,video by Prof. Michael Bronstein
- Distortion, on the Average and in Expectation NOTES by Prof. Herbert Edelsbrunner.
- The Role of Topology in Robotics NOTES by Prof. Daniel E. Koditschek.
- Safe and Efficient Robot Learning Using Riemannian Motion Policies NOTES by Anqi Li
I am interested in and taking notes on the following 15 papers at RSS 2021:
- Manipulator-Independent Representations for Visual Imitation [paper, website] by DeepMind.
- Optimal Pose and Shape Estimation for Category-level 3D Object Perception [paper, video] by MIT.
- Policy Transfer across Visual and Dynamics Domain Gaps via Iterative Grounding [paper, video] by USC
- An Empowerment-based Solution to Robotic Manipulation Tasks with Sparse Rewards [paper, video] by MIT
- NeuroBEM: Hybrid Aerodynamic Quadrotor Model [paper, video, code] by University of Zurich.
- Learning Generalizable Robotic Reward Functions from “In-The-Wild” Human Videos [paper, video] by Stanford University.
- Untangling Dense Non-Planar Knots by Learning Manipulation Features and Recovery Policies [paper,video] by UC Berkeley.
- TARE: A Hierarchical Framework for Efficiently Exploring Complex 3D Environments [paper, video, website, code] by CMU.
- STEP: Stochastic Traversability Evaluation and Planning for Risk-Aware Off-road Navigation [paper, video] by GTech, NASA-JPL, Caltech, as used on the Boston Dynamics Spot in this video.
- Language Conditioned Imitation Learning Over Unstructured Data [paper, video] by Google.
- HJB-RL: Initializing Reinforcement Learning with Optimal Control Policies Applied to Autonomous Drone Racing [paper, video] by Stanford University.
- Learning Riemannian Manifolds for Geodesic Motion Skills [paper, video] by Bosch.
- Safe Occlusion-Aware Autonomous Driving via Game-Theoretic Active Perception [paper, video] by Princeton University.
- Moving sidewinding forward: optimizing contact patterns for limbless robots via geometric mechanics [paper, video] by GTech, CMU.
- Active Learning of Abstract Plan Feasibility [paper, video] by MIT.
My Mentors: Kaushik Jayaram, Aaron M. Johnson, Jeannette Bohg, C. J. Taylor, Nick Roy.
RSS 2020, Early Career Award Keynote + Q&A: Jeannette Bohg (Stanford University) [Video]
Robotic Grasping of Novel Objects [NeurIPS 2016]: a dataset plus supervised learning (SVM, etc.) to find a good grasping point per pixel. Prof. Jeannette made a contribution in this line [ Learning grasping points with shape context ] using feature engineering (edge features & shape context [orientation, etc.]), moving from 2D grasping points to 6D grasping poses. Current works: [ Google Arm Farm, DexNet ]. Insights from Prof. Jeannette: open-loop execution does not work, avoiding collision is constraining, and 2D grasping points are not enough. So continuous feedback & re-planning are important, exploit the environment, and action representations matter. Real-time Perception meets Reactive Motion Generation, Probabilistic Articulated Real-Time Tracking for Robot Manipulation, and Riemannian Motion Policies were important updates.
More works on robots actually learning with contact constraints: Planar in-hand manipulation via motion cones, A novel type of compliant and underactuated robotic hand for dexterous grasping, An autonomous manipulation system based on force control and optimization, etc. Q-learning (Double Q) is used in the outer loop and model-free RL (A3C) in the inner loop.
Output: Learning to Scaffold the Development of Robotic Manipulation Skills. Prof. Jeannette also learned that grasping depends on both the object and the fingers: UniGrasp: Learning a Unified Model to Grasp with Multifingered Robotic Hands. Input: an object point cloud and a hand specification, used to compute contact points.
What's Next ? Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks, Concept2Robot: Learning Manipulation Concepts from Instructions and Human Demonstrations, Object-Centric Task and Motion Planning in Dynamic Environments, Self-Supervised Learning of State Estimation for Manipulating Deformable Linear Objects and Dynamic Multi-Robot Task Allocation under Uncertainty and Temporal Constraints.
Inspiring lines: you cannot learn everything by reading papers; you have to make mistakes and fail. Work on fixture optimization and virtual fixtures remains to be done.
RSS 2020, Early Career Award Keynote + Q&A: Luca Carlone (MIT) [Video]
Topic :: The Future of Robot Perception: Certifiable Algorithms and Real-time High-level Understanding. Luca is the Director of the SPARK Lab at MIT (Sensing, Perception, Autonomy, and Robot Kinetics). The SPARK Lab mostly works on robust perception, localization, and mapping (lidar-based SLAM & certifiable algorithms) and high-level scene understanding (Spatial AI): Kimera, metric-semantic SLAM [ 3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans - 3D Scene Understanding RSS'20 ]. Spatial perception: turning sensor data into an internal model that the robot can use. Luca's works include One Ring to Rule Them All: Certifiably Robust Geometric Perception with Outliers, Monitoring and Diagnosability of Perception Systems, etc.
Key takeaways from this talk: in order to get low failure rates (e.g., < 1e-7) and performance guarantees, we need to rethink current perception algorithms [Certifiable Perception Algorithms]. We need a theory of robust spatial perception: how do we connect robust algorithms into a robust system?
Image-based object localization: perception issues. ISSUE 1: front-end (hand-crafted or deep learned) can fail in unexpected ways (not uncommon to have >90% outlier). ISSUE 2: back-end may fail if there are many outliers.
Why does the back-end fail? The back-end, at the end of the day, is solving an optimization problem.
In Certifiable Algorithms, we have an input (measurements yi) → Optimization Algorithm → Output (estimate). Certifiable algorithms are fast (i.e., polynomial-time) algorithms that solve outlier rejection to optimality in virtually all problem instances, or detect failure in worst-case instances. Relative to RANSAC, Luca is trying to flatten the failure curve. The idea is to transform the robust estimation problem (non-convex, hard) into semidefinite (convex) problems solvable in polynomial time, using Black-Rangarajan duality: On the Unification of Line Processes, Outlier Rejection, and Robust Statistics with Applications in Early Vision, and the Lasserre hierarchy of relaxations: Global Optimization with Polynomials and the Problem of Moments.
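A closely related idea from Luca's group is Graduated Non-Convexity (GNC): start from a near-convex surrogate of the robust cost, then gradually restore its non-convexity while re-solving a weighted least-squares problem. Below is a minimal sketch on a toy scalar estimation task with a Geman-McClure cost; the threshold `c` and the `mu` schedule are my own illustrative choices, not values from the papers.

```python
import numpy as np

def gnc_gm_mean(y, c=0.5, mu_factor=1.4, iters=100):
    """Robust location estimate via Graduated Non-Convexity with a
    Geman-McClure cost: alternate a weighted least-squares step with a
    weight update, while shrinking the surrogate parameter mu toward 1."""
    w = np.ones_like(y)
    x = y.mean()                               # non-robust initialization
    mu = 2.0 * np.max((y - x) ** 2) / c**2     # large mu: near-convex surrogate
    for _ in range(iters):
        x = np.sum(w * y) / np.sum(w)          # weighted LS step (scalar mean)
        r2 = (y - x) ** 2
        w = (mu * c**2 / (r2 + mu * c**2)) ** 2  # Geman-McClure weight update
        mu = max(1.0, mu / mu_factor)          # gradually restore non-convexity
    return x

inliers = np.array([4.8, 4.9, 5.0, 5.1, 5.2])
outliers = np.array([100.0, 120.0])
print(gnc_gm_mean(np.concatenate([inliers, outliers])))   # close to 5.0
```

Despite ~29% gross outliers, the estimate stays on the inlier cluster, whereas the plain mean (35.0) is badly corrupted; this is the "flattening the curve" behavior relative to RANSAC-style methods.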
The success rate of TEASER++: fast & certifiable 3D registration [github] [YouTube] is remarkably high. In Perfect Shape: Certifiably Optimal 3D Shape Reconstruction from 2D Landmarks is another of Luca's impressive works.
Robust perception requires high-level 3D understanding, and 2D segmentation such as Mask R-CNN fails at this. Solution: Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping and Kimera: from SLAM to Spatial Perception with 3D Dynamic Scene Graphs. Kimera [github] can output a real-time 3D model of the environment.
3D Dynamic Scene Graph: 3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans. A transition from SLAM algorithms to a notion of spatial perception where we can spatially segment the layers of an environment. REFERENCES:
Certifiably Robust Perception Algorithms and Systems:
- One Ring to Rule Them All: Certifiably Robust Geometric Perception with Outliers
- Monitoring and Diagnosability of Perception Systems
- TEASER: Fast and Certifiable Point Cloud Registration
- Graduated Non-Convexity for Robust Spatial Perception: From Non-Minimal Solvers to Global Outlier Rejection
- In Perfect Shape: Certifiably Optimal 3D Shape Reconstruction from 2D Landmarks
- A Polynomial-time Solution for Robust Registration with Extreme Outlier Rates
- A Quaternion-based Certifiably Optimal Solution to the Wahba Problem with Outliers
- Outlier-Robust Spatial Perception: Hardness, General-Purpose Algorithms, and Guarantees
- Modeling Perceptual Aliasing in SLAM via Discrete-Continuous Graphical Models
High-level Understanding (3D Dynamic Scene Graphs and Kimera):
- One Ring to Rule Them All: Certifiably Robust Geometric Perception with Outliers
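The layered scene-graph idea above can be sketched as a simple data structure. This is a simplified illustration, not Kimera's API; the layer names only roughly follow the paper's hierarchy (mesh, objects & agents, places, rooms, buildings), and all node names are made up.

```python
from collections import defaultdict

# Hypothetical layer names, loosely following the 3D Dynamic Scene Graphs paper.
LAYERS = ("mesh", "objects_agents", "places", "rooms", "buildings")

class SceneGraph:
    """A tiny layered scene graph: nodes live on named layers, and edges
    connect nodes within a layer (e.g. traversability between places)
    or across layers (e.g. a room contains a place)."""
    def __init__(self):
        self.nodes = {}                 # node_id -> (layer, attributes)
        self.edges = defaultdict(set)   # node_id -> set of neighbor ids

    def add_node(self, node_id, layer, **attrs):
        assert layer in LAYERS, f"unknown layer: {layer}"
        self.nodes[node_id] = (layer, attrs)

    def add_edge(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def layer(self, name):
        """All node ids living on the given layer."""
        return [nid for nid, (l, _) in self.nodes.items() if l == name]

g = SceneGraph()
g.add_node("room_1", "rooms")
g.add_node("place_a", "places", position=(1.0, 2.0, 0.0))
g.add_node("human_1", "objects_agents", dynamic=True)
g.add_edge("room_1", "place_a")    # the room contains this place
g.add_edge("place_a", "human_1")   # a dynamic agent near the place
print(g.layer("places"))           # ['place_a']
```

The point of the layered structure is that a robot can plan at the level that fits the task: room-to-room navigation on the upper layers, obstacle avoidance on the mesh layer.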
RSS 2020, Test of Time: Award Talk + Q&A + Panel Debate [ Video ]
From Square Root SAM to GTSAM: Factor Graphs in Robotics [website]
Skydio Drones: the autonomy stack has to support superior navigation, tracking, and motion planning at very low power. Using sparse SLAM, we build a representation of the world around us. Many of these are optimization problems that are well solved by factor graphs. Factor graphs can represent many robotics problems, from tracking to optimal control to sophisticated 3D mapping. The factor graph also exposes opportunities for our field through its deep connection with sparse linear algebra: ordering heuristics, nested dissection, sparsification, pre-integration, iterative solvers, incremental inference, and the Bayes tree. So it gives opportunities to increase computational performance.
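The connection between factor graphs and sparse linear algebra can be made concrete with a tiny 1-D pose graph: a prior, three odometry factors, and one loop closure. The numbers are made up for illustration; a real solver like GTSAM exploits the sparsity of AᵀA via elimination orderings instead of calling a dense solver.

```python
import numpy as np

# Tiny 1-D pose graph over variables x0..x3 (all factors unit-variance):
# a prior anchoring x0, odometry between consecutive poses, and one
# loop closure x3 - x0 that slightly disagrees with the odometry chain.
factors = [
    (("prior", 0), 0.0),   # x0 ≈ 0
    ((0, 1), 1.0),         # x1 - x0 ≈ 1.0
    ((1, 2), 1.1),         # x2 - x1 ≈ 1.1
    ((2, 3), 0.9),         # x3 - x2 ≈ 0.9
    ((0, 3), 3.1),         # loop closure: x3 - x0 ≈ 3.1
]
n = 4
A = np.zeros((len(factors), n))   # Jacobian (kept dense here for clarity)
b = np.zeros(len(factors))
for row, (var, meas) in enumerate(factors):
    if var[0] == "prior":
        A[row, var[1]] = 1.0
    else:
        i, j = var
        A[row, i], A[row, j] = -1.0, 1.0
    b[row] = meas

# Each factor touches at most two variables, so A (and A^T A) is sparse;
# solving the factor graph is just linear least squares in this linear case.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)   # x ≈ [0.0, 1.025, 2.15, 3.075]
```

The 0.1 disagreement between the odometry chain (sum 3.0) and the loop closure (3.1) is fused by least squares: the chain of three unit-variance factors acts like one factor with variance 3, so the combined estimate of x3 is 3.075, with the correction spread evenly along the chain.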
SAM to GTSAM :
Smoothing and Mapping (SAM) : [ Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing ]
Navigation and Mapping :
[ iSAM: Incremental Smoothing and Mapping ] has been used in applications ranging from mapping aircraft carriers to underwater robotics. Pre-integrating IMU measurements yields state-of-the-art visual-inertial navigation.
Future :
Kimera from Luca's lab uses factor graphs, and the 'Dynamic Scene Graph' uses factor graphs as well. Factor graph applications: DeepFactors: Real-Time Probabilistic Dense Monocular SLAM, Hybrid Contact Preintegration for Visual-Inertial-Contact State Estimation Using Factor Graphs, Robust Legged Robot State Estimation Using Factor Graph Optimization, Motion Planning as Probabilistic Inference using Gaussian Processes and Factor Graphs, Batch and Incremental Kinodynamic Motion Planning using Dynamic Factor Graphs, A Nonparametric Belief Solution to the Bayes Tree, and Bundle Adjustment on a Graph Processor.