sega-hsj / mvt-3dvg
Multi-View Transformer for 3D Visual Grounding [CVPR 2022]
Hi,
Thanks for sharing the code! May I ask how large the computation is? Is 3D visual grounding computationally expensive?
Hi everyone,
I ran the training code with the command provided in the README, but the results are lower than those in the paper, as shown in the table below:
Accuracy | MVT | Reproduced |
---|---|---|
Nr3d | 55.1 | 51.52 |
Sr3D | 64.5 | 62.3 |
Therefore, I am wondering if anyone else has encountered this issue while reproducing the results.
Thanks in advance!
Thanks for your excellent work!
I found that batch["lang_tokens"] (created in the single_epoch_train function in referit_3d_net_utils.py) is not partitioned into 2 parts when I use 2 GPUs, which results in a mismatch in the LANG_LOGITS computation (forward function in referit_3d_net.py).
So I wonder what I need to revise if I want to use nn.DataParallel.
Looking forward to your reply, and I'm sorry if it was my mistake. :)
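For context, nn.DataParallel's scatter step only splits torch.Tensor inputs along dim 0; plain Python containers such as raw token lists are replicated to every GPU unchanged, which would explain the mismatch. One possible workaround (a sketch under my own assumptions, not the repo's code) is to pad the token ids into one rectangular batch before the forward call, so that after wrapping with torch.as_tensor it gets scattered like the rest of the batch:

```python
# Hypothetical token ids; in the repo these would come from whatever
# tokenizer builds batch["lang_tokens"].
token_ids = [[2, 7, 9], [4, 1], [5, 8, 3, 6]]

# Right-pad every sequence to the batch max length with 0 (assumed pad id).
# The result is rectangular, so torch.as_tensor(padded) yields a LongTensor
# that nn.DataParallel can split along dim 0, one slice per GPU.
max_len = max(len(ids) for ids in token_ids)
padded = [ids + [0] * (max_len - len(ids)) for ids in token_ids]
```

The padded ids then stay aligned with the per-GPU batch slices, so the language logits match the visual features again.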
Thanks for your wonderful work. : )
May I ask how I could directly visualize the prediction, i.e. bounding box of a certain object, of an input scene sample?
I've found some code concerning visualization in the function 'detailed_predictions_on_dataset' (referit3d_net_utils.py, line 176), but after checking it, I don't think it does what I want.
Thanks!
I found visualization code in ScanRefer. Did you use that code to visualize your results?
Hi, I wonder if the SOTA model's checkpoint will be released for evaluation purposes?
Best,
Zhening
Could you release the detailed code for visualization with Open3D? I have obtained test_resultall_vis.pkl in logs/checkpoints. Thanks.
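While waiting for the official script, here is a minimal sketch of the geometry side (my own assumptions: the predicted box is axis-aligned and given as center plus full extents). It turns a box into the corner and edge arrays that a wireframe renderer such as Open3D's `LineSet` consumes:

```python
import numpy as np

def box_corners(center, size):
    """8 corners (8, 3) of an axis-aligned box from its center and full
    extents. Corner i has sign pattern (x, y, z) = bits (i>>2, i>>1, i) & 1."""
    offsets = np.array([[x, y, z]
                        for x in (-0.5, 0.5)
                        for y in (-0.5, 0.5)
                        for z in (-0.5, 0.5)])
    return np.asarray(center, dtype=float) + offsets * np.asarray(size, dtype=float)

# The 12 edges pair corners that differ in exactly one coordinate; feed
# (corners, BOX_EDGES) to open3d.geometry.LineSet to draw the wireframe.
BOX_EDGES = [(0, 1), (0, 2), (1, 3), (2, 3), (4, 5), (4, 6), (5, 7), (6, 7),
             (0, 4), (1, 5), (2, 6), (3, 7)]

corners = box_corners([0.0, 0.0, 0.0], [2.0, 4.0, 6.0])
```

How the boxes are actually stored in test_resultall_vis.pkl is something only the authors can confirm.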
Thank you for the excellent work and the code provided! The README indeed gives a clear and explicit explanation of how to set up the ReferIt3D dataset. However, I can't find how to use the code with the ScanRefer dataset. Am I missing something you have already mentioned?
Great work on pushing the performance of 3D-VG models to a new level!
In my view, it seems unnatural to represent each view by only one point (the center of the object, according to the paper), so I am really curious about the motivation.
What about generating the different views by rotating all the points of an object, and using these views to conduct the experiments? Would this work or not?
Looking forward to the authors' reply~
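To make the suggestion concrete, rotating the full point cloud into K evenly spaced views around the z-axis might look like the sketch below (illustrative only; per the paper, MVT rotates only the object centers rather than all points):

```python
import numpy as np

def rotate_views(points, num_views=4):
    """Rotate a point cloud (N, 3) around the z-axis into `num_views`
    evenly spaced views; returns an array of shape (num_views, N, 3)."""
    views = []
    for k in range(num_views):
        theta = 2.0 * np.pi * k / num_views
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
        views.append(points @ R.T)  # rotate every point, not just the center
    return np.stack(views)

pts = np.array([[1.0, 0.0, 0.0]])
multi = rotate_views(pts, num_views=4)
```

Whether feeding all rotated points (rather than rotated centers) helps or hurts accuracy is exactly the experiment the question proposes.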
Hi,
I have a naive question about these two benchmarks. They both build upon ScanNet. Should I download the entire dataset (1.3TB), or is only a part of it used?
My PyTorch version is py3.8_cuda11.1_cudnn8.0.5_0, and I encounter a RuntimeError when compiling the CUDA layers for PointNet++.
Solution: I solved this by replacing all mentions of AT_CHECK with TORCH_CHECK in ./referit3d/external_tools/pointnet2/_ext_src/src.
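For anyone hitting the same build error, the replacement can be scripted. A minimal sketch (the demo file below is made up just to show the substitution; in the repo you would point the `grep` at ./referit3d/external_tools/pointnet2/_ext_src/src instead):

```shell
# Create a throwaway file that mimics the deprecated macro usage.
mkdir -p /tmp/pointnet2_fix_demo
printf 'AT_CHECK(points.is_contiguous(), "points must be contiguous");\n' \
  > /tmp/pointnet2_fix_demo/sampling.cpp

# Replace the deprecated AT_CHECK macro with TORCH_CHECK in every file
# that mentions it, editing in-place.
grep -rl 'AT_CHECK' /tmp/pointnet2_fix_demo | xargs sed -i 's/AT_CHECK/TORCH_CHECK/g'

cat /tmp/pointnet2_fix_demo/sampling.cpp
```

AT_CHECK was removed in newer PyTorch releases in favor of TORCH_CHECK, which is why the extension fails to compile against cuda11.1-era builds.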