hujiecpp / yoso
Code release for paper "You Only Segment Once: Towards Real-Time Panoptic Segmentation" [CVPR 2023]
License: MIT License
Thank you for your hard work on YOSO. Could you please provide me with a copy of your training log file?
Best regards,
[Xiaolin]
Hi,
Thank you for sharing the code for YOSO. I followed the instructions from the README and installed PyTorch and the other dependencies in that order. I am able to run python setup.py develop,
but when I get to training COCO with R50 using the following command, I get a runtime error saying "Not compiled with GPU support".
python projects/YOSO/train_net.py --num-gpus 1 --config-file /home/jmadinn/perception/YOSO/projects/YOSO/configs/coco/panoptic-segmentation/YOSO-R50.yaml
I have added the complete traceback at the bottom. I can confirm that torch.cuda.is_available()
returns True and the CUDA version is 11.3, as in the instructions. Could you please confirm whether this is an error you have seen before, and how I can resolve it? I have already created a new environment twice, but the issue persists.
Thanks in advance!
Traceback (most recent call last):
File "projects/YOSO/train_net.py", line 295, in
launch(
File "/home/jmadinn/perception/YOSO/detectron2/engine/launch.py", line 62, in launch
main_func(*args)
File "projects/YOSO/train_net.py", line 289, in main
return trainer.train()
File "/home/jmadinn/perception/YOSO/detectron2/engine/defaults.py", line 420, in train
super().train(self.start_iter, self.max_iter)
File "/home/jmadinn/perception/YOSO/detectron2/engine/train_loop.py", line 134, in train
self.run_step()
File "/home/jmadinn/perception/YOSO/detectron2/engine/defaults.py", line 430, in run_step
self._trainer.run_step()
File "/home/jmadinn/perception/YOSO/detectron2/engine/train_loop.py", line 228, in run_step
loss_dict = self.model(data)
File "/home/jmadinn/.conda/envs/YOSO/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jmadinn/perception/YOSO/projects/YOSO/yoso/segmentator.py", line 84, in forward
neck_feats = self.yoso_neck(features)
File "/home/jmadinn/.conda/envs/YOSO/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jmadinn/perception/YOSO/projects/YOSO/yoso/neck.py", line 186, in forward
features = self.deconv(features_list)
File "/home/jmadinn/.conda/envs/YOSO/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jmadinn/perception/YOSO/projects/YOSO/yoso/neck.py", line 126, in forward
x = self.deform_conv1(x5)
File "/home/jmadinn/.conda/envs/YOSO/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jmadinn/perception/YOSO/projects/YOSO/yoso/neck.py", line 60, in forward
out = self.dcn(out, offset, mask)
File "/home/jmadinn/.conda/envs/YOSO/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jmadinn/perception/YOSO/detectron2/layers/deform_conv.py", line 473, in forward
x = modulated_deform_conv(
File "/home/jmadinn/perception/YOSO/detectron2/layers/deform_conv.py", line 220, in forward
_C.modulated_deform_conv_forward(
RuntimeError: Not compiled with GPU support
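Not from the maintainers, just a commonly reported fix: this error usually means detectron2's compiled extension (_C) was built without CUDA kernels, even though the runtime sees the GPU. Assuming a source build like this repo's, one approach is to delete the stale build artifacts and rebuild with detectron2's FORCE_CUDA flag set:

```shell
# From the YOSO repo root: drop the previously compiled CPU-only extension.
rm -rf build/ detectron2/*.so

# Rebuild; FORCE_CUDA=1 tells detectron2's setup.py to compile the CUDA
# kernels even when no GPU is detected at build time.
FORCE_CUDA=1 python setup.py develop
```

If CUDA was not on PATH (nvcc not found) during the original install, the extension silently builds CPU-only, which matches the symptoms described above.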
I don't know how to train on my own dataset. Specifically: how should the dataset be arranged, which folder should it be placed in, and which files are needed?
I only know how to convert the dataset to COCO format.
If you could answer, I would greatly appreciate it.
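Not an official answer, but for reference: detectron2's builtin COCO loaders look for data under ./datasets (or the directory in $DETECTRON2_DATASETS) in roughly this layout, so a dataset already converted to COCO panoptic format can be arranged the same way:

```
datasets/
  coco/
    annotations/
      panoptic_train2017.json      # COCO panoptic annotations
      panoptic_val2017.json
    train2017/                     # training images
    val2017/                       # validation images
    panoptic_train2017/            # per-image panoptic PNGs
    panoptic_val2017/
```

A custom dataset then needs to be registered under its own name (detectron2 ships helpers such as register_coco_panoptic and register_coco_instances in detectron2.data.datasets) and referenced via DATASETS.TRAIN / DATASETS.TEST in the YAML config.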
How to convert a model (yoso_res50_coco.pth) to ONNX?
In the current form, the package is installed such that its subpackages are imported like this:
from demo import config
(which might very well clash with other local packages)
or unintuitive paths like this:
import projects.YOSO.yoso
whereas you would usually expect just
import yoso
or something similar.
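For what it's worth, a sketch of how the packaging could expose only the yoso package — a hypothetical setup.py excerpt using standard setuptools options, not the repo's actual one:

```python
# Hypothetical setup.py excerpt: map the package root to projects/YOSO so
# that only "import yoso" is installed, instead of top-level demo/projects.
from setuptools import find_packages, setup

setup(
    name="yoso",
    package_dir={"": "projects/YOSO"},
    packages=find_packages(where="projects/YOSO", include=["yoso", "yoso.*"]),
)
```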
YOSO error: WinError 1314
So firstly:
OSError: [WinError 1314] A required privilege is not held by the client: 'C:\Users\rohit\YOSO\configs' -> 'C:\Users\rohit\YOSO\detectron2\model_zoo\configs'
and in the handling of the error i get:
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\Users\rohit\YOSO\configs'
Now, the second error I understand, because configs exists inside detectron2\model_zoo, but I don't understand the first.
How can I resolve this?
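WinError 1314 means the process lacks the privilege to create symlinks; on Windows that requires either an elevated (administrator) shell or Developer Mode enabled in Settings. Independent of that, a setup script can sidestep the privilege entirely by falling back to a copy when the symlink is refused. A minimal stdlib sketch of that pattern, with hypothetical paths:

```python
import os
import shutil
import tempfile

def link_or_copy(source_dir: str, destination: str) -> str:
    """Try to symlink source_dir at destination; if symlink creation is
    not permitted (WinError 1314 surfaces as OSError), fall back to a
    copy. Returns "symlink" or "copy" to indicate which path was taken."""
    if os.path.lexists(destination):
        # Remove a stale link/copy left over from a previous install.
        if os.path.islink(destination):
            os.unlink(destination)
        else:
            shutil.rmtree(destination)
    try:
        os.symlink(source_dir, destination)
        return "symlink"
    except OSError:
        shutil.copytree(source_dir, destination)
        return "copy"

# Demo in a temporary directory with hypothetical names.
root = tempfile.mkdtemp()
src = os.path.join(root, "configs")
os.makedirs(src)
open(os.path.join(src, "YOSO-R50.yaml"), "w").close()
dst = os.path.join(root, "model_zoo_configs")
print(link_or_copy(src, dst))  # "symlink" where permitted, else "copy"
```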
When drawing the result of panoptic segmentation (draw_panoptic_seg_predictions), one of the subsequent calls inside that function (YOSO/detectron2/utils/visualizer.py, line 857) can produce a color whose channels are numerically > 1. For example, in my case: (0.9999999999999998, 1.0000000000000002, 1.0000000000000002), which in turn makes matplotlib crash.
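This looks like ordinary floating-point noise in the color computation. A workaround (a sketch of where to patch, not the upstream fix) is to clamp each channel into [0, 1] before handing the color to matplotlib:

```python
def clamp_color(color, lo=0.0, hi=1.0):
    """Clamp each RGB channel into [lo, hi]; rounding noise such as
    1.0000000000000002 otherwise makes matplotlib reject the color."""
    return tuple(min(max(c, lo), hi) for c in color)

print(clamp_color((0.9999999999999998, 1.0000000000000002, 1.0000000000000002)))
# -> (0.9999999999999998, 1.0, 1.0)
```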
The pretrained model offered for COCO at size (512, 800) seems to be identical to the (800, 1330) size model.
Hi,
I get an error while running setup.py:
Traceback (most recent call last):
File "setup.py", line 156, in get_model_zoo_configs
os.symlink(source_configs_dir, destination)
FileExistsError: [Errno 17] File exists: '/YOSO/configs' -> '/YOSO/detectron2/model_zoo/configs'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "setup.py", line 173, in <module>
package_data={"detectron2.model_zoo": get_model_zoo_configs()},
File "setup.py", line 159, in get_model_zoo_configs
shutil.copytree(source_configs_dir, destination)
File "/opt/conda/envs/YOSO/lib/python3.8/shutil.py", line 555, in copytree
with os.scandir(src) as itr:
FileNotFoundError: [Errno 2] No such file or directory: '/YOSO/configs'
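Reading the two tracebacks together: the symlink step fails because /YOSO/detectron2/model_zoo/configs already exists (likely a leftover from an earlier run, possibly a dangling link), and the copytree fallback then fails because /YOSO/configs is not found, which suggests setup.py is not being run from the repo root. A stdlib sketch, with hypothetical paths, of a check that distinguishes the two cases:

```python
import os
import tempfile

def diagnose(destination: str, source: str) -> str:
    """Explain why a symlink-then-copy install step fails: a leftover
    (possibly dangling) destination link, or a missing source directory."""
    if os.path.lexists(destination) and not os.path.exists(destination):
        return "dangling symlink at destination; remove it and rerun"
    if not os.path.isdir(source):
        return "source configs/ missing; run setup.py from the repo root"
    return "ok"

# Demo: a dangling symlink in a temp dir (hypothetical names).
root = tempfile.mkdtemp()
dangling = os.path.join(root, "model_zoo_configs")
os.symlink(os.path.join(root, "no_such_dir"), dangling)
print(diagnose(dangling, os.path.join(root, "configs")))
```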
I tried this setting, but I get an error.
I want to measure the inference speed of a TensorRT-converted model on a Jetson board. Could you tell me whether conversion code is supported?
Hi, thank you for your great work!
I downloaded your pre-trained models for COCO and Cityscapes.
It seems the checkpoint files of the two Cityscapes versions, (1024, 2048) and (512, 1024), are the same file.
Could you check the link and verify it?
Thank you
The video size is 1080 * 720, and I get fps = 5 on a 3090, which is far below the reported 23.6. What could the reason be?
I've already set the ignore_label in the register.
Thanks for your work!
The paper states that YOSO delivers competitive performance compared with state-of-the-art models.
In fact, considerably better results are currently available for both the COCO and Cityscapes datasets. For COCO, PQ = 59.5 is obtained by OpenSeeD (see: https://paperswithcode.com/sota/panoptic-segmentation-on-coco-minival), and a PQ of about 70 can be achieved on Cityscapes (https://paperswithcode.com/sota/panoptic-segmentation-on-cityscapes-val).
The presented approach reaches only 46.4 and 52.5, respectively, far from the state-of-the-art solutions.
It seems that YOSO has sacrificed precision heavily in pursuit of real-time performance?
Please add a license file of some kind
If you want other people to use your work, an MIT-style license is strongly encouraged
Hi @hujiecpp,
The following error occurs when I evaluate YOSO for instance segmentation on Cityscapes dataset using your provided model. Can you verify this evaluation on your end?
I was able to replicate the panoptic and semantic results. Thank you for your great work!
Hi, after I ran python demo/demo.py --config-file projects/YOSO/configs/coco/panoptic-segmentation/YOSO-R50.yaml --video-input input_video.mp4 --output output_video.mp4 --opts MODEL.WEIGHTS ./model_zoo/yoso_res50_coco.pth, it printed Segmentation fault (core dumped), with no error message and no output. Do you know what might be going on?
Hello, I encountered the following problems when testing the demo:
“RuntimeError: GET was unable to find an engine to execute this computation”
How do I solve it? Thank you!