
HyperSeg - Official PyTorch Implementation

Home Page: https://nirkin.com/hyperseg

License: Creative Commons Zero v1.0 Universal

Language: Python 100.00%
Topics: real-time, semantic-segmentation, segmentation

hyperseg's Introduction

HyperSeg - Official PyTorch Implementation

Teaser: Example segmentations on the PASCAL VOC dataset.

This repository contains the source code for the real-time semantic segmentation method described in the paper:

HyperSeg: Patch-wise Hypernetwork for Real-time Semantic Segmentation
Conference on Computer Vision and Pattern Recognition (CVPR), 2021
Yuval Nirkin, Lior Wolf, Tal Hassner
Paper

Abstract: We present a novel, real-time, semantic segmentation network in which the encoder both encodes and generates the parameters (weights) of the decoder. Furthermore, to allow maximal adaptivity, the weights at each decoder block vary spatially. For this purpose, we design a new type of hypernetwork, composed of a nested U-Net for drawing higher level context features, a multi-headed weight generating module which generates the weights of each block in the decoder immediately before they are consumed, for efficient memory utilization, and a primary network that is composed of novel dynamic patch-wise convolutions. Despite the usage of less-conventional blocks, our architecture obtains real-time performance. In terms of the runtime vs. accuracy trade-off, we surpass state of the art (SotA) results on popular semantic segmentation benchmarks: PASCAL VOC 2012 (val. set) and real-time semantic segmentation on Cityscapes, and CamVid.

Installation

Clone the repository and set up the environment:

git clone https://github.com/YuvalNirkin/hyperseg
cd hyperseg
conda env create -f hyperseg_env.yml
conda activate hyperseg
pip install -e .    # Alternatively add the root directory of the repository to PYTHONPATH.

Next, download the models and datasets:

Models

| Template | Dataset | Resolution | mIoU (%) | FPS | Link |
|---|---|---|---|---|---|
| HyperSeg-L | PASCAL VOC | 512x512 | 80.6 (val) | - | download |
| HyperSeg-M | CityScapes | 1024x512 | 76.2 (val) | 36.9 | download |
| HyperSeg-S | CityScapes | 1536x768 | 78.2 (val) | 16.1 | download |
| HyperSeg-S | CamVid | 768x576 | 78.4 (test) | 38.0 | download |
| HyperSeg-L | CamVid | 1024x768 | 79.1 (test) | 16.6 | - |

The models' FPS was measured on an NVIDIA GeForce GTX 1080 Ti GPU.

Either download the models under <project root>/weights or adjust the model variable in the test configuration files.

Datasets

| Dataset | # Images | Classes | Resolution | Link |
|---|---|---|---|---|
| PASCAL VOC | 10,582 | 21 | up to 500x500 | auto downloaded |
| CityScapes | 5,000 | 19 | 2048x1024 | download |
| CamVid | 701 | 12 | 960x720 | download |

Either download the datasets under <project root>/data or adjust the data_dir variable in the configuration files.

Training

To train the HyperSeg-M model on Cityscapes, set the exp_dir and data_dir paths in cityscapes_efficientnet_b1_hyperseg-m.py and run:

python configs/train/cityscapes_efficientnet_b1_hyperseg-m.py

Testing

Testing a model after training

For example, to test the HyperSeg-M model on the Cityscapes validation set:

python test.py 'checkpoints/cityscapes/cityscapes_efficientnet_b1_hyperseg-m' \
-td "hyperseg.datasets.cityscapes.CityscapesDataset('data/cityscapes',split='val',mode='fine')" \
-it "seg_transforms.LargerEdgeResize([512,1024])"

Testing a pretrained model

For example, to test the PASCAL VOC HyperSeg-L model using the available test configuration:

python configs/test/vocsbd_efficientnet_b3_hyperseg-l.py

Citation

@inproceedings{nirkin2021hyperseg,
  title={{HyperSeg}: Patch-wise Hypernetwork for Real-time Semantic Segmentation},
  author={Nirkin, Yuval and Wolf, Lior and Hassner, Tal},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month={June},
  year={2021}
}

hyperseg's People

Contributors

cg-yuval, yuvalnirkin


hyperseg's Issues

Hi, please help me

When I load the checkpoint "cityscapes_efficientnet_b1_hyperseg-s.pth", I get KeyError: 'optimizer'.

Traceback (most recent call last):
File "D:/PyCharm Community Edition 2022.2.2/plugins/python-ce/helpers/pydev/pydevd.py", line 1496, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "D:\PyCharm Community Edition 2022.2.2\plugins\python-ce\helpers\pydev_pydev_imps_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "E:\Pythonfiles\hyperseg-main\configs\train\cityscapes_efficientnet_b1_hyperseg-s.py", line 47, in
scheduler=scheduler, pretrained=pretrained, model=model, criterion=criterion, batch_scheduler=batch_scheduler)
File "E:\Pythonfiles\hyperseg-main\hyperseg\train.py", line 227, in main
optimizer.load_state_dict(checkpoint["optimizer"])
KeyError: 'optimizer'

thank you
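
A possible workaround, not taken from the repository: the released .pth files appear to contain only model weights (under a 'state_dict' key), so resuming from them fails when train.py also tries to restore optimizer state. Either point the config's pretrained option at the file instead of resuming from it, or guard the optimizer restore. A minimal sketch, assuming model and optimizer are already constructed:

import torch

# Weight-only checkpoints, such as the released .pth files, may not include optimizer state.
checkpoint = torch.load('weights/cityscapes_efficientnet_b1_hyperseg-s.pth', map_location='cpu')
model.load_state_dict(checkpoint['state_dict'])
if 'optimizer' in checkpoint:
    optimizer.load_state_dict(checkpoint['optimizer'])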

Error in operation camvid_efficientnet_b1_hyperseg-s

Original Traceback (most recent call last):
File "D:\Anaconda3\envs\hyperseg\lib\site-packages\torch\utils\data_utils\worker.py", line 302, in worker_loop
data = fetcher.fetch(index)
File "D:\Anaconda3\envs\hyperseg\lib\site-packages\torch\utils\data_utils\fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "D:\Anaconda3\envs\hyperseg\lib\site-packages\torch\utils\data_utils\fetch.py", line 49, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\zhangpan\Desktop\论文集\尝试跑通-CVPR-2021-HyperSeg
Patch-wise Hypernetwork for Real-time Semantic Segmentation\hyperseg-main\hyperseg\datasets\camvid.py", line 110, in getitem
img, target = self.transforms(img, target)
File "C:\Users\zhangpan\Desktop\论文集\尝试跑通-CVPR-2021-HyperSeg_ Patch-wise Hypernetwork for Real-time Semantic Segmentation\hyperseg-main\hyperseg\datasets\seg_transforms.py", line 51, in call
input = list(t(*input))
File "C:\Users\zhangpan\Desktop\论文集\尝试跑通-CVPR-2021-HyperSeg_ Patch-wise Hypernetwork for Real-time Semantic Segmentation\hyperseg-main\hyperseg\datasets\seg_transforms.py", line 172, in call
img = larger_edge_resize(img, self.size, self.interpolation) ####################
File "C:\Users\zhangpan\Desktop\论文集\尝试跑通-CVPR-2021-HyperSeg_ Patch-wise Hypernetwork for Real-time Semantic Segmentation\hyperseg-main\hyperseg\datasets\seg_transforms.py", line 147, in larger_edge_resize
return img.resize(size[::-1], interpolation) ############################
File "D:\Anaconda3\envs\hyperseg\lib\site-packages\PIL\Image.py", line 2070, in resize
raise ValueError(
ValueError: Unknown resampling filter (InterpolationMode.BICUBIC). Use Image.Resampling.NEAREST (0), Image.Resampling.LANCZOS (1), Image.Resampling.BILINEAR (2), Image.Resampling.BICUBIC (3), Image.Resampling.BOX (4) or Image.Resampling.HAMMING (5)
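
This error typically appears when newer Pillow/torchvision versions are installed than the ones pinned in hyperseg_env.yml: the transform ends up passing a torchvision InterpolationMode enum to PIL's Image.resize, which expects a PIL resampling constant. A hedged workaround (an assumption, not the repository's fix) is to install the pinned versions, or to translate the enum before calling resize:

from PIL import Image
from torchvision.transforms import InterpolationMode

# Map torchvision's enum members to the constants accepted by PIL's Image.resize.
TORCHVISION_TO_PIL = {
    InterpolationMode.NEAREST: Image.NEAREST,
    InterpolationMode.BILINEAR: Image.BILINEAR,
    InterpolationMode.BICUBIC: Image.BICUBIC,
    InterpolationMode.LANCZOS: Image.LANCZOS,
}

def pil_interpolation(interpolation):
    # Values that are already PIL constants pass through unchanged.
    return TORCHVISION_TO_PIL.get(interpolation, interpolation)

# e.g. in larger_edge_resize: img.resize(size[::-1], pil_interpolation(interpolation))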

No module named 'hyperseg'

I keep getting ModuleNotFoundError: No module named 'hyperseg' when running python configs/test/vocsbd_efficientnet_b3_hyperseg-l.py. Why?

CUDA out of memory

Thank you for sharing your work.
Why do I get a CUDA out-of-memory error at the same fixed stage of training every time?

inference hflip

Why are the results still flipped even though I disabled inference_hflip? Thanks for your reply.

python test.py

python test.py 'checkpoints/vocsbd\vocsbd_efficientnet_b3_hyperseg-l' -td "hyperseg.datasets.voc_sbd.VOCSBDDataset('data/vocsbd',split='val',mode='fine')" -it "seg_transforms.LargerEdgeResize([512,1024])"
Traceback (most recent call last):
File "F:\22-lx\code\hyperseg-main\test.py", line 296, in
main(**vars(parser.parse_args()))
File "F:\22-lx\code\hyperseg-main\test.py", line 117, in main
assert os.path.isdir(exp_dir), f'exp_dir "{exp_dir}" must be a path to a directory'
AssertionError: exp_dir "'checkpoints/vocsbd\vocsbd_efficientnet_b3_hyperseg-l'" must be a path to a directory
Can you help me? Thank you.

Another bug about the script of train camvid

After solving the bug in the CamVid dataset, I found another bug in the CamVid training script: running it raises "AttributeError: 'RandomRotation' object has no attribute 'resample'". Note that the CamVid test script runs fine. I don't know how to deal with it.

Question about freezing updates for model

Hello, thank you so much for releasing your code! I am currently training the model on a custom dataset, and I was wondering whether it is possible to freeze the gradient updates for a select number of classes. I am currently zeroing out the gradients of the decoder level-4 BatchNorm3 layer, because that is the only layer whose size matches the number of classes in my custom dataset, but when I try this, the weights of the classes I am trying to freeze still change.
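
A minimal sketch of one way to freeze per-class updates, assuming the class-dependent weights live in a single parameter whose first dimension indexes classes (the placeholder name class_weight below is not from the repository): register a gradient hook that zeroes the rows of the frozen classes. Note that optimizers with momentum or weight decay can still move "frozen" rows, which may explain why the weights keep changing even after the gradients are zeroed.

import torch

frozen_classes = [3, 7]  # placeholder class indices to keep fixed

def zero_frozen_rows(grad):
    grad = grad.clone()
    grad[frozen_classes] = 0  # remove the gradient contribution of the frozen classes
    return grad

# class_weight is a placeholder for the parameter whose first dimension equals the number of classes.
class_weight.register_hook(zero_frozen_rows)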

TypeError: an integer is required (got type tuple)

Hi, thanks for sharing your work. There is an error when I run the provided sample training code.

TRAINING: Epoch: 1 / 360; LR: 1.0e-03; losses: [total: 3.1942 (3.1942); ] bench: [iou: 0.0106 (0.0106); ] : 0%| | 1/1000 [00:07<1:59:20, 7.17s/batches]

Traceback (most recent call last):
File "/home/tt/zyj_ws/hyperseg/configs/train/cityscapes_efficientnet_b1_hyperseg-m.py", line 43, in
main(exp_dir, train_dataset=train_dataset, val_dataset=val_dataset, train_img_transforms=train_img_transforms,
File "/home/tt/zyj_ws/hyperseg/train.py", line 248, in main
epoch_loss, epoch_iou = proces_epoch(train_loader, train=True)
File "/home/tt/zyj_ws/hyperseg/train.py", line 104, in proces_epoch
for i, (input, target) in enumerate(pbar):
File "/home/tt/anaconda3/envs/zyjenv/lib/python3.9/site-packages/tqdm/std.py", line 1185, in iter
for obj in iterable:
File "/home/tt/anaconda3/envs/zyjenv/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 517, in next
data = self._next_data()
File "/home/tt/anaconda3/envs/zyjenv/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1179, in _next_data
return self._process_data(data)
File "/home/tt/anaconda3/envs/zyjenv/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
data.reraise()
File "/home/tt/anaconda3/envs/zyjenv/lib/python3.9/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 1.
Original Traceback (most recent call last):
File "/home/tt/anaconda3/envs/zyjenv/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/tt/anaconda3/envs/zyjenv/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/tt/anaconda3/envs/zyjenv/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/tt/zyj_ws/hyperseg/datasets/cityscapes.py", line 220, in getitem
image, target = self.transforms(image, target)
File "/home/tt/zyj_ws/hyperseg/datasets/seg_transforms.py", line 78, in call
input = list(t(*input))
File "/home/tt/zyj_ws/hyperseg/datasets/seg_transforms.py", line 334, in call
lbl = F.pad(lbl, (int(self.size[1] - lbl.size[0]), 0), self.lbl_fill, self.padding_mode).copy()
File "/home/tt/anaconda3/envs/zyjenv/lib/python3.9/site-packages/torchvision/transforms/functional.py", line 426, in pad
return F_pil.pad(img, padding=padding, fill=fill, padding_mode=padding_mode)
File "/home/tt/anaconda3/envs/zyjenv/lib/python3.9/site-packages/torchvision/transforms/functional_pil.py", line 153, in pad
image = ImageOps.expand(img, border=padding, **opts)
File "/home/tt/anaconda3/envs/zyjenv/lib/python3.9/site-packages/PIL/ImageOps.py", line 403, in expand
draw.rectangle((0, 0, width - 1, height - 1), outline=color, width=border)
File "/home/tt/anaconda3/envs/zyjenv/lib/python3.9/site-packages/PIL/ImageDraw.py", line 259, in rectangle
self.draw.draw_rectangle(xy, ink, 0, width)
TypeError: an integer is required (got type tuple)

According to information I found online, this may be a problem in the transforms. Can you give me some suggestions on how to fix it?

hyperseg

How do I perform this step?: Add the parent directory of the repository to PYTHONPATH.
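
A minimal sketch of one way to do this (the path below is a placeholder for your local clone): either set the PYTHONPATH environment variable to the repository root before running the scripts, run pip install -e . from the root, or append the root to sys.path at the top of the config/training script:

import sys

# Directory that contains the hyperseg/ package (placeholder path).
sys.path.append('/path/to/hyperseg')

import hyperseg  # should now import without ModuleNotFoundError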

Have you tried your new method on face segmentation?

Have you compared your new method with your old one "Deep face segmentation in extremely hard conditions"? I'm looking for the latest open source method for face segmentation which can handle occlusions like hands.

No module named 'hyper_seg'

Because the training failed, I tried downloading cityscapes_efficientnet_b1_hyperseg-m.pth to hyperseg/weights and renaming it to model_best.pth.
I then ran:
python test.py 'weights' -td "hyper_seg.datasets.cityscapes.CityscapesDataset('data/cityscapes',split='val',mode='fine')" -it "seg_transforms.LargerEdgeResize([512,1024])" --gpus 0
Looking forward to your reply!

Unknown resampling filter (InterpolationMode.BICUBIC)

(pytorch_hyperseg_master) F:\22-lx\code\hyperseg-main>python hyperseg/test.py checkpoints/vocsbd/vocsbd_efficientnet_b3_hyperseg-l -td "hyperseg.datasets.voc_sbd.VOCSBDDataset('data/vocsbd','data/vocsbd/VOCdevkit/VOC2012/val.txt')" -it "seg_transforms.LargerEdgeResize([512,1024])"
=> using GPU devices: 0
=> Loading segmentation model: "model_best.pth"...
Loading pretrained weights for efficientnet-b3...
0%| | 0/25 [00:03<?, ?batches/s]
Traceback (most recent call last):
File "F:\22-lx\code\hyperseg-main\hyperseg\test.py", line 296, in
main(**vars(parser.parse_args()))
File "F:\22-lx\code\hyperseg-main\hyperseg\test.py", line 156, in main
for i, (input, target) in enumerate(tqdm(test_loader, unit='batches', file=sys.stdout)):
File "D:\Anaconda\envs\pytorch_hyperseg_master\lib\site-packages\tqdm\std.py", line 1195, in iter
for obj in iterable:
File "D:\Anaconda\envs\pytorch_hyperseg_master\lib\site-packages\torch\utils\data\dataloader.py", line 681, in next
data = self._next_data()
File "D:\Anaconda\envs\pytorch_hyperseg_master\lib\site-packages\torch\utils\data\dataloader.py", line 1376, in _next_data
return self._process_data(data)
File "D:\Anaconda\envs\pytorch_hyperseg_master\lib\site-packages\torch\utils\data\dataloader.py", line 1402, in _process_data
data.reraise()
File "D:\Anaconda\envs\pytorch_hyperseg_master\lib\site-packages\torch_utils.py", line 461, in reraise
raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "D:\Anaconda\envs\pytorch_hyperseg_master\lib\site-packages\torch\utils\data_utils\worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "D:\Anaconda\envs\pytorch_hyperseg_master\lib\site-packages\torch\utils\data_utils\fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "D:\Anaconda\envs\pytorch_hyperseg_master\lib\site-packages\torch\utils\data_utils\fetch.py", line 49, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "f:\22-lx\code\hyperseg-main\hyperseg\datasets\voc_sbd.py", line 97, in getitem
img, target = self.transforms(img, target)
File "f:\22-lx\code\hyperseg-main\hyperseg\datasets\seg_transforms.py", line 52, in call
input = list(t(*input))
File "f:\22-lx\code\hyperseg-main\hyperseg\datasets\seg_transforms.py", line 173, in call
img = larger_edge_resize(img, self.size, self.interpolation)
File "f:\22-lx\code\hyperseg-main\hyperseg\datasets\seg_transforms.py", line 148, in larger_edge_resize
return img.resize(size[::-1], interpolation)
File "D:\Anaconda\envs\pytorch_hyperseg_master\lib\site-packages\PIL\Image.py", line 2130, in resize
raise ValueError(msg)
ValueError: Unknown resampling filter (InterpolationMode.BICUBIC). Use Image.Resampling.NEAREST (0), Image.Resampling.LANCZOS (1), Image.Resampling.BILINEAR (2), Image.Resampling.BICUBIC (3), Image.Resampling.BOX (4) or Image.Resampling.HAMMING (5)

About Test bug

While using the test script on the VOC dataset, I get this error: [Errno 2] No such file or directory: 'data/vocsbd/VOCdevkit/VOC2012/JPEGImages/2007_000033.jpg'. How can I deal with it?

signal_index is not getting updated

The variable signal_index is not getting updated inside the init_signal2weights function in hyperseg_v1_0.py. Hence, signal_index = 0 is assigned to every meta block in the decoder.

Because of this, inside the apply_signal2weights function of each meta block we cannot use all of the input signal's channels. Essentially we only use [0:signal_channels] of the whole signal, where each block's signal_channels is always smaller than the number of input signal channels.

        # Inside apply_signal2weights function
        w = self.signal2weights(s[:, self.signal_index:self.signal_index + self.signal_channels])[:, :self.hyper_params]

So, is the signal_index supposed to be zero for every meta block?
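
For reference, a hypothetical sketch (not the repository's code) of the behaviour the issue expects, where each meta block would consume a disjoint slice of the signal by advancing signal_index by the previous block's signal_channels:

# Hypothetical: assign each decoder meta block its own offset into the signal tensor.
signal_index = 0
for block in decoder_meta_blocks:  # placeholder iterable over the decoder's meta blocks
    block.signal_index = signal_index
    signal_index += block.signal_channels

Whether this is the intended behaviour, or whether reusing offset 0 in every block is deliberate, is exactly what the issue asks.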

Saving for C++ inference

First, great work !

I was trying to train in Python and save it for C++ inference.
The classic approach doesn't work:

annotation_script_module = torch.jit.script(model)
annotation_script_module.save("my_path")

Do you have any suggestion on how to do it?
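
One hedged starting point, not verified for this model: trace the network with a fixed-size example input instead of scripting it, since tracing records concrete tensor operations and avoids scripting the dynamic weight-generation control flow. Whether the traced graph remains valid for HyperSeg's patch-wise dynamic convolutions would still need to be checked.

import torch

model.eval()  # model is assumed to be a constructed HyperSeg network with loaded weights
example = torch.randn(1, 3, 512, 1024)  # fixed input resolution (assumption)
traced = torch.jit.trace(model, example)
traced.save('hyperseg_traced.pt')  # loadable from C++ via torch::jit::load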

ValueError: Unknown resampling filter (InterpolationMode.BICUBIC).

I have some problems when training the model on my own data, and I am not sure whether I made a mistake when loading the data.
Here are the details:

Traceback (most recent call last):
File "E:\zjy\hyperseg-main\hyperseg\train.py", line 254, in main
epoch_loss, epoch_iou = proces_epoch(val_loader, train=False)
File "E:\zjy\hyperseg-main\hyperseg\train.py", line 104, in proces_epoch
for i, (input, target) in enumerate(pbar):
File "E:\anaconda3\envs\zhangjiaying\lib\site-packages\tqdm\std.py", line 1130, in iter
for obj in iterable:
File "E:\anaconda3\envs\zhangjiaying\lib\site-packages\torch\utils\data\dataloader.py", line 521, in next
data = self._next_data()
File "E:\anaconda3\envs\zhangjiaying\lib\site-packages\torch\utils\data\dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "E:\anaconda3\envs\zhangjiaying\lib\site-packages\torch\utils\data\dataloader.py", line 1229, in _process_data
data.reraise()
File "E:\anaconda3\envs\zhangjiaying\lib\site-packages\torch_utils.py", line 438, in reraise
raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "E:\anaconda3\envs\zhangjiaying\lib\site-packages\torch\utils\data_utils\worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "E:\anaconda3\envs\zhangjiaying\lib\site-packages\torch\utils\data_utils\fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "E:\anaconda3\envs\zhangjiaying\lib\site-packages\torch\utils\data_utils\fetch.py", line 49, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "E:\zjy\hyperseg-main\hyperseg\datasets\Massachusetts.py", line 104, in getitem
img, target = self.transforms(img, target)
File "E:\zjy\hyperseg-main\hyperseg\datasets\seg_transforms.py", line 78, in call
input = list(t(*input))
File "E:\zjy\hyperseg-main\hyperseg\datasets\seg_transforms.py", line 199, in call
img = larger_edge_resize(img, self.size, self.interpolation)
File "E:\zjy\hyperseg-main\hyperseg\datasets\seg_transforms.py", line 174, in larger_edge_resize
return img.resize(size[::-1], interpolation)
File "E:\anaconda3\envs\zhangjiaying\lib\site-packages\PIL\Image.py", line 1861, in resize
raise ValueError(
ValueError: Unknown resampling filter (InterpolationMode.BICUBIC). Use Image.NEAREST (0), Image.LANCZOS (1), Image.BILINEAR (2), Image.BICUBIC (3), Image.BOX (4) or Image.HAMMING (5)

How to convert the pth file to onnx?

Has anyone tried to convert the .pth file to ONNX and could help me?
I tried, but I got this error: RuntimeError: Unsupported: ONNX export of convolution for kernel of unknown shape.

Further information
File "C:\ProgramData\Anaconda3\envs\project\lib\site-packages\torch\onnx\utils.py", line 729, in _export
dynamic_axes=dynamic_axes)
File "C:\ProgramData\Anaconda3\envs\project\lib\site-packages\torch\onnx\utils.py", line 501, in _model_to_graph
module=module)
File "C:\ProgramData\Anaconda3\envs\project\lib\site-packages\torch\onnx\utils.py", line 216, in _optimize_graph
graph = torch._C.jit_pass_onnx(graph, operator_export_type)
File "C:\ProgramData\Anaconda3\envs\project\lib\site-packages\torch\onnx_init.py", line 373, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\project\lib\site-packages\torch\onnx\utils.py", line 1032, in _run_symbolic_function
return symbolic_fn(g, *inputs, **attrs)
File "C:\ProgramData\Anaconda3\envs\project\lib\site-packages\torch\onnx\symbolic_helper.py", line 172, in wrapper
return fn(g, *args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\project\lib\site-packages\torch\onnx\symbolic_opset9.py", line 1281, in _convolution
raise RuntimeError("Unsupported: ONNX export of convolution for kernel "

Notes
Any additional information, code snippets:
import torch
from torch.autograd import Variable

# `model` is the HyperSeg network constructed beforehand.
dummy_input = Variable(torch.randn(7, 3, 512, 1024))
state_dict = torch.load('./model_best.pth')
model.load_state_dict(state_dict['state_dict'])
torch.onnx.export(model, dummy_input, "arch.onnx", opset_version=11)

no checkpoint found at 'checkpoints/cityscapes\cityscapes_efficientnet_b1_hyperseg-m'

C:\Users\Oikura.conda\envs\g37two\python.exe C:\Users\Oikura\Desktop\drive\code\hyperseg-main\configs\train\cityscapes_efficientnet_b1_hyperseg-m.py
100%|██████████| 2975/2975 [01:02<00:00, 47.65files/s]
100%|██████████| 500/500 [00:10<00:00, 47.87files/s]
C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch\utils\data\dataloader.py:490: UserWarning: This DataLoader will create 16 worker processes in total. Our suggested max number of worker in current system is 8 (cpuset is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
cpuset_checked))
Loading pretrained weights for efficientnet-b1...
C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch\functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
=> no checkpoint found at 'checkpoints/cityscapes\cityscapes_efficientnet_b1_hyperseg-m'
0%| | 0/250 [00:00<?, ?batches/s]Traceback (most recent call last):
File "", line 1, in
0%| | 0/250 [00:04<?, ?batches/s]
File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\spawn.py", line 105, in spawn_main
Traceback (most recent call last):
File "C:\Users\Oikura\Desktop\drive\code\hyperseg-main\configs\train\cityscapes_efficientnet_b1_hyperseg-m.py", line 48, in
scheduler=scheduler, pretrained=pretrained, model=model, criterion=criterion, batch_scheduler=batch_scheduler)
exitcode = _main(fd) File "C:\Users\Oikura\Desktop\drive\code\hyperseg-main\hyperseg\train.py", line 248, in main

File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\spawn.py", line 225, in prepare
epoch_loss, epoch_iou = proces_epoch(train_loader, train=True)
File "C:\Users\Oikura\Desktop\drive\code\hyperseg-main\hyperseg\train.py", line 104, in proces_epoch
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
for i, (input, target) in enumerate(pbar):
File "C:\Users\Oikura.conda\envs\g37two\lib\site-packages\tqdm\std.py", line 1178, in iter
run_name="mp_main")
File "C:\Users\Oikura.conda\envs\g37two\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\Oikura.conda\envs\g37two\lib\runpy.py", line 96, in _run_module_code
for obj in iterable:
File "C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch\utils\data\dataloader.py", line 368, in iter
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\Oikura.conda\envs\g37two\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Oikura\Desktop\drive\code\hyperseg-main\configs\train\cityscapes_efficientnet_b1_hyperseg-m.py", line 4, in
return self._get_iterator()
File "C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch\utils\data\dataloader.py", line 314, in get_iterator
import torch.optim as optim
File "C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch_init
.py", line 126, in
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch\utils\data\dataloader.py", line 927, in init
raise err
OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch\lib\cudnn_cnn_infer64_8.dll" or one of its dependencies.
w.start()
File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\popen_spawn_win32.py", line 89, in init
reduction.dump(process_obj, to_child)
File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe

Process finished with exit code 1

why?

Dataset not found or incomplete.

(liuhaomag) cv428@428:~/data/LH/hyperseg$ python configs/train/cityscapes_efficientnet_b1_hyperseg-m.py
Traceback (most recent call last):
  File "configs/train/cityscapes_efficientnet_b1_hyperseg-m.py", line 46, in <module>
    scheduler=scheduler, pretrained=pretrained, model=model, criterion=criterion, batch_scheduler=batch_scheduler)
  File "/home/cv428/data/LH/hyperseg/train.py", line 187, in main
    train_dataset = obj_factory(train_dataset, transforms=train_transforms)
  File "/home/cv428/data/LH/hyperseg/utils/obj_factory.py", line 57, in obj_factory
    return obj_exp(*args, **kwargs)
  File "/home/cv428/data/LH/hyperseg/datasets/cityscapes.py", line 156, in __init__
    raise RuntimeError('Dataset not found or incomplete. Please make sure all required folders for the'
RuntimeError: Dataset not found or incomplete. Please make sure all required folders for the specified "split" and "mode" are inside the "root" directory
 

Inference

How can I run inference on the Cityscapes dataset? Can someone help me, please?
Thank you!

Backbone

How can I change the backbone to something else, like MobileNet or ResNet? Can someone help me, please?
Thank you!

Assertion `t >= 0 && t < n_classes` failed.

Sorry to bother you. I have encountered a problem: the assertion t >= 0 && t < n_classes fails when I try to use your model on my dataset. I have changed the num_class for my dataset, but I still face the problem. Can you help me? Here is my change:
self.classes = list(range(2))
self.image_classes = calc_classes_per_image(self.masks, 2, cache_file)
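
As a quick diagnostic (a sketch with a placeholder path, not from the repository): this CUDA assertion usually means some mask pixels contain values outside [0, n_classes), so printing the unique label values of a few masks shows whether the class ids or the ignore label need remapping.

import numpy as np
from PIL import Image

mask = np.array(Image.open('path/to/some_mask.png'))  # placeholder mask path
print(np.unique(mask))  # values should lie in range(n_classes), apart from the ignore label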

mIoU on Pascal VOC is about 0.7 after 50 epochs

Thank you for sharing your work.
I have tried to retrain the model on the Pascal VOC dataset, but the mIoU is only about 0.7 after 50 epochs and it hardly increases. Is this expected, or should I continue training for 160 epochs?
