ut-austin-rpl / ditto
Code for Ditto: Building Digital Twins of Articulated Objects from Interaction
License: MIT License
When I run the real laptop demo notebook, I encounter this error. I'm not sure where this model is coming from.
I want to train Ditto on my own mesh files with URDF files as the training dataset. Is there any code for generating the dataset?
Hi, I'd like to ask what data is contained in the camera2base.json file. It looks like the pose of the camera, but what does the base coordinate system refer to?
I have been replicating the visualization result shown in demo_depth_map.ipynb (https://github.com/UT-Austin-RPL/Ditto/blob/master/notebooks/demo_depth_map.ipynb). The following are the steps I executed:
The prismatic joint axis looks different from the demo, and the meshes of both digital twins in my result seem incomplete.
Hello, this error occurs when I run training. Is there any solution?
File "C:\Users\Labor\anaconda3\envs\Ditto\lib\runpy.py", line 234, in _get_code_from_file
with io.open_code(decoded_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Labor\Digital-Twin\Ditto\logs\runs\2022-05-19\Ditto_s2m-10-04-35\run.py'
Hi,
Thanks for your great work. When going through the demo code, I don't quite understand the recenter process. Can you help explain it a bit? Based on my current understanding, the pivot point is already the averaged motion origin. Why do we need another recenter operation? What is the purpose of the double cross product?
if recenter: pivot_point = np.cross(axis, np.cross(pivot_point, axis))
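For context on what the snippet above computes: by the vector triple-product identity, a × (p × a) = p(a·a) − a(a·p), which for a unit axis a equals p minus its component along a. Since the pivot of a revolute joint is only defined up to translation along the joint axis, this projection canonicalizes it to the point on the axis line closest to the origin. A minimal sketch (the function name `recenter_pivot` is ours, not from the repo):

```python
import numpy as np

def recenter_pivot(pivot_point, axis):
    """Project pivot_point onto the plane through the origin that is
    perpendicular to the (unit) joint axis:
    a x (p x a) = p(a.a) - a(a.p) = p - (a.p) a  for unit a."""
    axis = axis / np.linalg.norm(axis)
    return np.cross(axis, np.cross(pivot_point, axis))

# Any point on the same joint-axis line maps to the same canonical pivot:
axis = np.array([0.0, 0.0, 1.0])
p1 = np.array([1.0, 2.0, 5.0])
p2 = p1 + 3.0 * axis            # shifted along the axis
print(recenter_pivot(p1, axis))  # -> [1. 2. 0.]
print(recenter_pivot(p2, axis))  # -> [1. 2. 0.] (same canonical point)
```

So the "double cross" removes the arbitrary along-axis component of the averaged motion origin, making the predicted pivot comparable across samples.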
Hi,
I notice that in the demo, the laptop is converted from the depth image to camera coordinates, then to world coordinates. What is the reason for converting into world coordinates? Does it guarantee that the object is in the canonical object space?
When training on the Shape2Motion data, is the training data in the canonical object space (with a canonical pose), or is it just in the camera coordinate frame?
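For readers unfamiliar with the conversion the question refers to: back-projecting a depth map through the camera intrinsics gives points in the camera frame, and a camera-to-world extrinsic then moves them into a shared world frame. A minimal sketch, assuming a hypothetical intrinsic matrix `K` and 4x4 extrinsic `T_cam2world` (the actual demo loads these from its own files, e.g. camera2base.json):

```python
import numpy as np

def depth_to_world(depth, K, T_cam2world):
    """Back-project a depth map (H, W) to camera-frame points using the
    pinhole model, then transform the point cloud into the world frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]   # (u - cx) * z / fx
    y = (v - K[1, 2]) * z / K[1, 1]   # (v - cy) * z / fy
    pts_cam = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Homogeneous transform into the world (base) frame.
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
    return (pts_h @ T_cam2world.T)[:, :3]
```

Under these assumptions the "base" frame would simply be whatever frame `T_cam2world` maps into; whether that coincides with a canonical object frame is exactly the question being asked.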
Hi, following the tutorial, I have managed to visualize the test results on the real-world dataset, which depends on depth-image and RGB-image inputs. However, how can I visualize the test results on the Shape2Motion dataset? Thank you very much for your help!
The utils3d module is missing, but it is still used in the notebook.
Hi! I'm new to the way your code is organized, and I wonder where the output of the model is stored, and how to visualize it using the utils3d tools.
Many thanks!
Hi, can you share the data pre-processing pipeline? We think your work could bring some inspiration to our research. Thank you very much!