jakaria08 / eesrgan

Small-Object Detection in Remote Sensing (satellite) Images with End-to-End Edge-Enhanced GAN and Object Detector Network

License: GNU General Public License v3.0

Topics: esrgan, detection-performance, dataset, object-detection, super-resolution, ssd, frcnn, edge-enhancement, remote-sensing, satellite-imagery

eesrgan's Introduction

EESRGAN

Model Architecture

(Figure: overall model architecture.)

Enhancement and Detection

(Figure: low-resolution image & detection, super-resolved image & detection, and high-resolution ground-truth image & bounding box.)

Dependencies and Installation

  • Python 3 (Anaconda is recommended)
  • PyTorch >= 1.0
  • NVIDIA GPU + CUDA
  • Python packages: pip install -r path/to/requirement.txt (a quick environment check is sketched below)
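
The following snippet is a quick, optional sanity check (not part of the repository) to confirm the dependencies above are in place before training:

    # Quick environment check (assumes a standard PyTorch install).
    import sys
    import torch

    print("Python:", sys.version.split()[0])              # should be 3.x
    print("PyTorch:", torch.__version__)                   # should be >= 1.0
    print("CUDA available:", torch.cuda.is_available())    # NVIDIA GPU + CUDA required
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))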

Training

python train.py -c config_GAN.json

Testing

python test.py -c config_GAN.json
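
For reference, here is a minimal, hypothetical sketch of how such a -c/--config entry point is typically wired up with argparse; the actual train.py and test.py in this repository may be organized differently:

    # Hypothetical sketch of an entry point that accepts -c config_GAN.json.
    # The repository's real train.py / test.py may be structured differently.
    import argparse
    import json

    def main(config):
        # config is the parsed JSON dict; downstream code would build the
        # data loader, model, and trainer from it.
        print("Data loader type:", config["data_loader"]["type"])

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="EESRGAN training/testing")
        parser.add_argument("-c", "--config", required=True,
                            help="path to JSON config file, e.g. config_GAN.json")
        args = parser.parse_args()
        with open(args.config) as f:
            main(json.load(f))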

Dataset

Download the dataset from here. Here is a GitHub repo to create custom image patches. Download the pre-made dataset from here; this script can be used with the pre-made dataset to create high/low-resolution and bicubic images. Make sure to copy the annotation files (.txt) into the HR, LR, and Bic folders.
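
As a rough illustration of the last step, the sketch below copies per-patch annotation files into the HR, LR, and Bic folders; all paths and folder names here are placeholders, not the repository's actual layout:

    # Hedged sketch: copy per-patch annotation .txt files into the HR, LR and
    # Bic folders. The paths below are placeholders; adjust to your layout.
    import shutil
    from pathlib import Path

    annotations = Path("/path/to/annotations")        # one .txt per image patch
    targets = [Path("/path/to/HR/x4"),
               Path("/path/to/LR/x4"),
               Path("/path/to/Bic/x4")]

    for ann in annotations.glob("*.txt"):
        for folder in targets:
            folder.mkdir(parents=True, exist_ok=True)
            shutil.copy(ann, folder / ann.name)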

Edit the JSON File

The directories in the following JSON need to be changed according to your local setup. For details, see config_GAN.json; pretrained weights are uploaded to Google Drive.

{
    "data_loader": {
        "type": "COWCGANFrcnnDataLoader",
        "args": {
            "data_dir_GT": "/Directory for High-Resolution Ground Truth images/",
            "data_dir_LQ": "/Directory for 4x downsampled Low-Resolution images from the above High-Resolution images/"
        }
    },

    "path": {
        "models": "saved/save_your_model_in_this_directory/",
        "pretrain_model_G": "Pretrained_model_path_for_train_test/170000_G.pth",
        "pretrain_model_D": "Pretrained_model_path_for_train_test/170000_D.pth",
        "pretrain_model_FRCNN": "Pretrained_model_path_for_train_test/170000_FRCNN.pth",
        "data_dir_Valid": "/Low_resolution_test_validation_image_directory/",
        "Test_Result_SR": "Directory_to_store_test_results/"
    }
}
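
A mistyped or empty directory in this file is a common source of errors (see the num_samples=0 and IndexError issues below), so a quick sanity check before training can help. The snippet below is not part of the repository; the key names follow the JSON above:

    # Optional sanity check: load config_GAN.json and verify the input
    # directories actually exist before launching training.
    import json
    import os

    with open("config_GAN.json") as f:
        cfg = json.load(f)

    paths_to_check = [
        cfg["data_loader"]["args"]["data_dir_GT"],
        cfg["data_loader"]["args"]["data_dir_LQ"],
        cfg["path"]["data_dir_Valid"],
    ]
    for p in paths_to_check:
        print("ok     " if os.path.isdir(p) else "MISSING", p)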

Paper

Find the published version on Remote Sensing.
Find the preprints of the related paper on preprints.org, arxiv.org and researchgate.net.

Abstract

The detection performance of small objects in remote sensing images has not been satisfactory compared to large objects, especially in low-resolution and noisy images. A generative adversarial network (GAN)-based model called enhanced super-resolution GAN (ESRGAN) showed remarkable image enhancement performance, but reconstructed images usually miss high-frequency edge information. Therefore, object detection performance showed degradation for small objects on recovered noisy and low-resolution remote sensing images. Inspired by the success of edge-enhanced GAN (EEGAN) and ESRGAN, we applied a new edge-enhanced super-resolution GAN (EESRGAN) to improve the quality of remote sensing images and used different detector networks in an end-to-end manner, where the detector loss was backpropagated into the EESRGAN to improve the detection performance. We proposed an architecture with three components: ESRGAN, an edge-enhancement network (EEN), and a detection network. We used residual-in-residual dense blocks (RRDB) for both the ESRGAN and the EEN, and for the detector network we used a faster region-based convolutional network (FRCNN; a two-stage detector) and a single-shot multibox detector (SSD; a one-stage detector). Extensive experiments on a public car overhead with context (COWC) dataset and another self-assembled oil and gas storage tank satellite dataset showed superior performance of our method compared to standalone state-of-the-art object detectors.
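
To make the end-to-end idea concrete, here is a minimal, illustrative sketch of adding a detector loss computed on the super-resolved image to the generator objective, so that detector gradients flow back into the super-resolution network. The tiny stand-in modules and the loss weight are placeholders, not the paper's EESRGAN, EEN, or FRCNN components:

    # Minimal sketch of end-to-end SR + detection training (illustrative only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    sr_net = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1),
                           nn.Upsample(scale_factor=4))   # stand-in generator
    det_net = nn.Conv2d(3, 1, 3, padding=1)               # stand-in detector head
    opt_g = torch.optim.Adam(sr_net.parameters(), lr=1e-4)

    lr_img = torch.rand(1, 3, 64, 64)        # low-resolution input
    hr_img = torch.rand(1, 3, 256, 256)      # high-resolution ground truth
    det_target = torch.rand(1, 1, 256, 256)  # stand-in detection target

    sr_img = sr_net(lr_img)                               # super-resolved image
    l_sr = F.l1_loss(sr_img, hr_img)                      # SR (pixel) loss
    l_det = F.mse_loss(det_net(sr_img), det_target)       # detector loss on SR output

    l_total = l_sr + 1.0 * l_det                          # combined objective
    opt_g.zero_grad()
    l_total.backward()     # detector loss also updates sr_net via sr_img
    opt_g.step()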

Keywords

object detection; faster region-based convolutional neural network (FRCNN); single-shot multibox detector (SSD); super-resolution; remote sensing imagery; edge enhancement; satellites

Related Repository

Some code segments are based on ESRGAN.

Citation

BibTex

@article{rabbi2020small,
  title={Small-Object Detection in Remote Sensing Images with End-to-End Edge-Enhanced GAN and Object Detector Network},
  author={Rabbi, Jakaria and Ray, Nilanjan and Schubert, Matthias and Chowdhury, Subir and Chao, Dennis},
  journal={Remote Sensing},
  volume={12},
  number={9},
  pages={1432},
  year={2020},
  publisher={Multidisciplinary Digital Publishing Institute}
}

Chicago

Rabbi, Jakaria; Ray, Nilanjan; Schubert, Matthias; Chowdhury, Subir; Chao, Dennis. 2020. "Small-Object Detection in Remote Sensing Images with End-to-End Edge-Enhanced GAN and Object Detector Network." Remote Sens. 12, no. 9: 1432.

To Do

  • Refactor and clean the code.
  • Add more command-line options for training and testing to run different configurations.
  • Fix bugs and write important tests.


eesrgan's Issues

torch.jit.frontend.UnsupportedNodeError: JoinedStr aren't supported

I tried to run this program with the environment and torch version specified by the author, but this error occurred. Could you please help me check it? Thank you very much.

Traceback (most recent call last):
File "train.py", line 10, in
import model.model as module_arch
File "/home/cas/桌面/jack/EESRGAN-master/model/model.py", line 7, in
import kornia
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/kornia/init.py", line 13, in
from kornia import augmentation, color, contrib, enhance, feature, filters, geometry, jit, losses, morphology, utils
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/kornia/jit/init.py", line 7, in
grayscale_to_rgb = torch.jit.script(K.color.grayscale_to_rgb)
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/init.py", line 823, in script
ast = get_jit_def(obj)
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 158, in get_jit_def
return build_def(ctx, py_ast.body[0], type_line, self_name)
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 198, in build_def
build_stmts(ctx, body))
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 122, in build_stmts
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 122, in
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 174, in call
return method(ctx, node)
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 332, in build_If
build_stmts(ctx, stmt.body),
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 122, in build_stmts
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 122, in
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 174, in call
return method(ctx, node)
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 288, in build_Raise
expr = build_expr(ctx, stmt.exc)
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 174, in call
return method(ctx, node)
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 405, in build_Call
args = [build_expr(ctx, py_arg) for py_arg in expr.args]
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 405, in
args = [build_expr(ctx, py_arg) for py_arg in expr.args]
File "/home/cas/anaconda3/envs/deep-learning/lib/python3.7/site-packages/torch/jit/frontend.py", line 173, in call
raise UnsupportedNodeError(ctx, node)
torch.jit.frontend.UnsupportedNodeError: JoinedStr aren't supported
image: grayscale image to be converted to RGB with shape :math:(*,1,H,W).
Returns:
RGB version of the image with shape :math:(*,3,H,W).

Example:
    >>> input = torch.randn(2, 1, 4, 5)
    >>> gray = grayscale_to_rgb(input) # 2x3x4x5
"""
if not isinstance(image, torch.Tensor):
    raise TypeError(f"Input type is not a torch.Tensor. " f"Got {type(image)}")
                    ~ <--- HERE
if image.dim() < 3 or image.size(-3) != 1:
    raise ValueError(f"Input size must have a shape of (*, 1, H, W). " f"Got {image.shape}.")
rgb: torch.Tensor = torch.cat([image, image, image], dim=-3)
image_is_float: bool = torch.is_floating_point(image)
if not image_is_float:
    warnings.warn(f"Input image is not of float dtype. Got {image.dtype}")
return rgb

python train.py -c config_GAN.json

When I run python train.py -c config_GAN.json
I meet a problem:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [3, 64, 3, 3]] is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

I changed all the ReLU layers to inplace=False, but it does not work.

ValueError: num_samples should be a positive integer value, but got num_samples=0

I downloaded the file DetectionPatches_256x256 and generated the HR/LR directories using scripts_GAN_HR-LR.py.
The HR/LR directories were generated as expected; however, when I run python train.py -c config_GAN.json after modifying config_GAN.json as follows:

"data_dir_GT": "/media/cao/WINDOWS/Users/123/Downloads/DetectionPatches_256x256/Potsdam_ISPRS/HR/x4/",
"data_dir_LQ": "/media/cao/WINDOWS/Users/123/Downloads/DetectionPatches_256x256/Potsdam_ISPRS/LR/x4/",
"FRCNN_model": "/media/cao/WINDOWS/Users/123/Downloads/EESRGAN-master/saved/FRCNN_model_LR_LR_cowc/",
"pretrain_model_G": "/media/cao/WINDOWS/Users/123/Downloads/EESRGAN-master/saved/pretrained_models_EESRGAN_FRCNN/170000_G.pth",
"pretrain_model_D": "/media/cao/WINDOWS/Users/123/Downloads/EESRGAN-master/saved/pretrained_models_EESRGAN_FRCNN/170000_D.pth",
"pretrain_model_FRCNN": "/media/cao/WINDOWS/Users/123/Downloads/EESRGAN-master/saved/pretrained_models_EESRGAN_FRCNN/170000_FRCNN.pth",
"pretrain_model_FRCNN_LR_LR": "/media/cao/WINDOWS/Users/123/Downloads/EESRGAN-master/saved/FRCNN_model_LR_LR_cowc/0_FRCNN_LR_LR.pth",

I still get the error: ValueError: num_samples should be a positive integer value, but got num_samples=0

Is there something I forgot to do, or is something wrong?
My English is poor; thank you very much.

about frcnn loss

Hey, it's nice work. I have a question:
in your paper, the total loss includes the FRCNN loss, but in this code I did not see it.

if step % self.D_update_ratio == 0 and step > self.D_init_iters:
    if self.cri_pix:  # pixel loss
        l_g_pix = self.l_pix_w * self.cri_pix(self.fake_H, self.var_H)
        l_g_total += l_g_pix
    if self.cri_fea:  # feature loss
        real_fea = self.netF(self.var_H).detach()  # don't want to backpropagate this, need proper explanation
        fake_fea = self.netF(self.fake_H)  # In netF normalize=False, check it
        l_g_fea = self.l_fea_w * self.cri_fea(fake_fea, real_fea)
        l_g_total += l_g_fea

    pred_g_fake = self.netD(self.fake_H)
    if self.configT['gan_type'] == 'gan':
        l_g_gan = self.l_gan_w * self.cri_gan(pred_g_fake, True)
    elif self.configT['gan_type'] == 'ragan':
        pred_d_real = self.netD(self.var_ref).detach()
        l_g_gan = self.l_gan_w * (
            self.cri_gan(pred_d_real - torch.mean(pred_g_fake), False) +
            self.cri_gan(pred_g_fake - torch.mean(pred_d_real), True)) / 2
    l_g_total += l_g_gan
    # EESN calculate loss
    self.lap_HR = kornia.laplacian(self.var_H, 3)
    if self.cri_charbonnier:  # charbonnier pixel loss HR and SR
        l_e_charbonnier = 5 * (self.cri_charbonnier(self.final_SR, self.var_H)
                               + self.cri_charbonnier(self.x_learned_lap_fake, self.lap_HR))  # change the weight empirically
    l_g_total += l_e_charbonnier
    #### did not see the frcnn loss
    l_g_total.backward(retain_graph=True)
    self.optimizer_G.step()

The End-to-End in code

I carefully read your code and wanted to find the end-to-end learning process of "GAN and Det", but I didn't find it. I think ESRGAN_EESN_FRCNN_Model.py only implements separate training.

...
l_g_total.backward(retain_graph=True)  
self.optimizer_G.step()  
...
l_d_total.backward()
self.optimizer_D.step()
...
losses.backward()
self.optimizer_FRCNN.step()

I don't think this part backpropagates the detector and discriminator losses into the GAN.
Can you release the code for end-to-end training?

Train, Validation, Test Split

How did you split the train, validation, test sets? I read the other issues and a few people asked the same question. I'd like to reproduce the results. Could you share the code for the separation or inform us about how you did that?

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

I am getting the following error in the code on running the following command.

$ python train.py -c config_GAN.json

Traceback (most recent call last):
File "train.py", line 101, in
main(config)
File "train.py", line 80, in main
trainer.train()
File "/home/jovyan/EESRGAN/trainer/cowc_GAN_FRCNN_trainer.py", line 88, in train
self.model.optimize_parameters(current_step)
File "/home/jovyan/EESRGAN/model/ESRGAN_EESN_FRCNN_Model.py", line 243, in optimize_parameters
losses.backward()
File "/opt/conda/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/opt/conda/lib/python3.7/site-packages/torch/autograd/init.py", line 100, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [3, 64, 3, 3]] is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

I have torch version 1.6.0 and torchvision version 0.6

Test Problem

Hi, thanks for your work! I have a question as follows:
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Detection started........
1
[[1, 0.9999994, 171, 190, 203, 222]]
1
libpng warning: Image width is zero in IHDR
libpng warning: Image height is zero in IHDR
libpng error: Invalid IHDR data
successfully generated the results!
When I modify dest_image_path = os.path.join(config['path']['Test_Result_SR'], file_name+'.png') to
dest_image_path = os.path.join(config['path']['Test_Result_SR'], file_name+'.jpg'), the problem seems to be solved, as follows:

Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Detection started........
1
[[1, 0.9999994, 171, 190, 203, 222]]
1
successfully generated the results!
But the results contain no information for the images in either situation; do you know how to address this?

TypeError: object of type <class 'numpy.float64'> cannot be safely interpreted as an integer.

Hi,

I am trying to run the test script using the following command:

python test.py -c config_GAN.json

I get the below error, could you please help me out?
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/numpy/core/function_base.py", line 117, in linspace
num = operator.index(num)
TypeError: 'numpy.float64' object cannot be interpreted as an integer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "test.py", line 96, in
main(config)
File "test.py", line 19, in main
tester.test()
File "/home/jovyan/EESRGAN/trainer/cowc_GAN_FRCNN_trainer.py", line 38, in test
self.model.test(self.data_loader, train=False, testResult=True)
File "/home/jovyan/EESRGAN/model/ESRGAN_EESN_FRCNN_Model.py", line 272, in test
evaluate(self.netG, self.netFRCNN, self.targets, self.device)
File "/opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/home/jovyan/EESRGAN/detection/engine.py", line 142, in evaluate
coco_evaluator = CocoEvaluator(coco, iou_types)
File "/home/jovyan/EESRGAN/detection/coco_eval.py", line 28, in init
self.coco_eval[iou_type] = COCOeval(coco_gt, iouType=iou_type)
File "/opt/conda/lib/python3.7/site-packages/pycocotools/cocoeval.py", line 76, in init
self.params = Params(iouType=iouType) # parameters
File "/opt/conda/lib/python3.7/site-packages/pycocotools/cocoeval.py", line 527, in init
self.setDetParams()
File "/opt/conda/lib/python3.7/site-packages/pycocotools/cocoeval.py", line 507, in setDetParams
self.iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True)
File "<array_function internals>", line 6, in linspace
File "/opt/conda/lib/python3.7/site-packages/numpy/core/function_base.py", line 121, in linspace
.format(type(num)))
TypeError: object of type <class 'numpy.float64'> cannot be safely interpreted as an integer.

Can we train to only get Super-Res images?

Hi

I'm trying to make this work on my custom dataset. I only have JPEGs, no annotation data, and I am only interested in getting the SR images as output. Will the code work without annotations, and where would I need to modify the code?

datasets problem

I do not understand "data_dir_Valid": "/home/gfzx/yjg/Super_Resolution/Datasets/COWC/DetectionPatches_256x256/Potsdam_ISPRS/LR/x4/valid_img/".
How do I get the "valid_img" folder?

RuntimeError: A view was created in no_grad mode and is being modified inplace with grad mode enabled. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one

Hello @Jakaria08, I'm getting this error. Can you help me solve it?

21-05-02 10:35:50.553 - INFO: Total epochs needed: 736 for iters 400,000
2021-05-02 10:35:50.717038: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
21-05-02 10:35:51.840 - INFO: Start training from epoch: 0, iter: 0
Traceback (most recent call last):
File "/content/drive/MyDrive/Project/EESRGAN/train.py", line 94, in
main(config)
File "/content/drive/MyDrive/Project/EESRGAN/train.py", line 73, in main
trainer.train()
File "/content/drive/MyDrive/Project/EESRGAN/trainer/cowc_GAN_FRCNN_trainer.py", line 83, in train
self.model.optimize_parameters(current_step)
File "/content/drive/MyDrive/Project/EESRGAN/model/ESRGAN_EESN_FRCNN_Model.py", line 234, in optimize_parameters
loss_dict = self.netFRCNN(self.intermediate_img, self.targets)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torchvision/models/detection/generalized_rcnn.py", line 78, in forward
images, targets = self.transform(images, targets)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torchvision/models/detection/transform.py", line 110, in forward
images = self.batch_images(images)
File "/usr/local/lib/python3.7/dist-packages/torchvision/models/detection/transform.py", line 215, in batch_images
pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)

Above is the error I'm getting. Can you help me?

config_GAN.json ?

Hi, when I use python test.py -c config_GAN.json to test, an error happens:

  • EESRGAN/scripts_for_datasets/COWC_EESRGAN_FRCNN_dataset.py", line 32, in __getitem__
    img_path_gt = os.path.join(self.data_dir_gt, self.imgs_gt[idx])

  • IndexError: list index out of range

  • Is it a config_GAN.json problem?

RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

I am getting the following problem while training in colab:

Traceback (most recent call last):
File "train.py", line 94, in
main(config)
File "train.py", line 73, in main
trainer.train()
File "/content/EESRGAN/trainer/cowc_GAN_FRCNN_trainer.py", line 83, in train
self.model.optimize_parameters(current_step)
File "/content/EESRGAN/model/ESRGAN_EESN_FRCNN_Model.py", line 196, in optimize_parameters
l_g_total.backward(retain_graph=True)
File "/usr/local/lib/python3.7/dist-packages/torch/tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/usr/local/lib/python3.7/dist-packages/torch/autograd/init.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

Please help!

cowc

Can you upload the COWC dataset used for the experiments?

IndexError: list index out of range

When I ran train.py, I ran into the same problem. As you suggested, I confirmed that the dataset path is correct, but the same problem still occurs. I would like to know if there is a good solution.
Traceback (most recent call last):
File "train.py", line 103, in
main(config)
File "train.py", line 82, in main
trainer.train()
File "G:\new co\EESRGAN-master\trainer\cowc_GAN_FRCNN_trainer.py", line 75, in train
for _, (image, targets) in enumerate(self.data_loader):
File "D:\anaconda\envs\TensorFlow113\lib\site-packages\torch\utils\data\dataloader.py", line 560, in next
batch = self.collate_fn([self.dataset[i] for i in indices])
File "D:\anaconda\envs\TensorFlow113\lib\site-packages\torch\utils\data\dataloader.py", line 560, in
batch = self.collate_fn([self.dataset[i] for i in indices])
File "G:\new co\EESRGAN-master\scripts_for_datasets\COWC_EESRGAN_FRCNN_dataset.py", line 33, in getitem
annotation_path = os.path.join(self.data_dir_gt, self.annotation[idx])
IndexError: list index out of range

ImportError: attempted relative import with no known parent package

When executing the train file, I encounter the following error. Please help me.

Traceback (most recent call last):
File "", line 1176, in _find_and_load
File "", line 1147, in _find_and_load_unlocked
File "", line 690, in _load_unlocked
File "", line 940, in exec_module
File "", line 241, in _call_with_frames_removed
File "C:\Users\Asus\PycharmProjects\pythonProject3\EESRGAN-master (1)\EESRGAN-master\detection\engine.py", line 9, in
from .coco_utils import get_coco_api_from_dataset, get_coco_api_from_dataset_base
ImportError: attempted relative import with no known parent package

IndexError: list index out of range

  • python3 test.py -c config_GAN.json

  • 480
    [250000, 500000, 750000]
    Traceback (most recent call last):
    File "test.py", line 96, in
    main(config)
    File "test.py", line 19, in main
    tester.test()
    File "/home/mjz/2D_Detcetion/EESRGAN/trainer/cowc_GAN_FRCNN_trainer.py", line 38, in test
    self.model.test(self.data_loader, train=False, testResult=True)
    File "/home/mjz/2D_Detcetion/EESRGAN/model/ESRGAN_EESN_FRCNN_Model.py", line 273, in test
    evaluate(self.netG, self.netFRCNN, self.targets, self.device)
    File "/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
    return func(*args, **kwargs)
    File "/home/mjz/2D_Detcetion/EESRGAN/detection/engine.py", line 140, in evaluate
    coco = get_coco_api_from_dataset(data_loader.dataset)
    File "/home/mjz/2D_Detcetion/EESRGAN/detection/coco_utils.py", line 255, in get_coco_api_from_dataset
    return convert_to_coco_api(dataset)
    File "/home/mjz/2D_Detcetion/EESRGAN/detection/coco_utils.py", line 154, in convert_to_coco_api
    img, targets = ds[img_idx]
    File "/home/mjz/2D_Detcetion/EESRGAN/scripts_for_datasets/COWC_EESRGAN_FRCNN_dataset.py", line 32, in getitem
    img_path_gt = os.path.join(self.data_dir_gt, self.imgs_gt[idx])
    IndexError: list index out of range

about the datasets

I tested your pretrained model. I got a high score on Potsdam but a low score on Toronto.
How did you select the dataset? Did you train only on Potsdam?

Potsdam
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.982
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.990
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.990
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.982
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.985
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.229
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.937
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.993
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.993
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.994
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

Toronto
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.235
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.388
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.261
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.246
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.256
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.106
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.270
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.275
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.275
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.280
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

CUDA out of memory on custom data

I'm trying to train on 256x256 tiles; both train and test images are 256x256. I have a 16 GB GPU, yet it still runs out of memory.

2020-10-19 17:45:53.661774: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
20-10-19 17:45:54.452 - INFO: Start training from epoch: 0, iter: 0
Traceback (most recent call last):
  File "train.py", line 100, in <module>
    main(config)
  File "train.py", line 79, in main
    trainer.train()
  File "/home/jovyan/EESRGAN/trainer/cowc_GAN_FRCNN_trainer.py", line 88, in train
    self.model.optimize_parameters(current_step)
  File "/home/jovyan/EESRGAN/model/ESRGAN_EESN_FRCNN_Model.py", line 167, in optimize_parameters
    self.fake_H, self.final_SR, self.x_learned_lap_fake, _ = self.netG(self.var_L)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/jovyan/EESRGAN/model/model.py", line 569, in forward
    x_base = self.netRG(x) # add bicubic according to the implementation by author but not stated in the paper
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/jovyan/EESRGAN/model/model.py", line 334, in forward
    fea = self.lrelu(self.upconv2(F.interpolate(fea, scale_factor=2, mode='nearest')))
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 345, in forward
    return self.conv2d_forward(input, self.weight)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
    self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 14.73 GiB total capacity; 13.73 GiB already allocated; 135.88 MiB free; 13.75 GiB reserved in total by PyTorch)

Question about test

Hello, I want to know whether, in the testing process, the images are generated without passing through the EEN model. I get the following results:
(Attempted to attach Potsdam_2_10_RGB.0.11.png, but the upload did not complete.)

HR/X4 LR/X4

Regarding the dataset used in this paper, I want to ask you a question. I have read your code: the data inputs are LR and HR, where LR is generated by downsampling the COWC dataset and HR is the original COWC image. During testing, the input is also generated by downsampling the COWC dataset. Shouldn't the images we test on be the original COWC images? If I want to use other datasets, how should I set up the training input? Should the test input be the original test-set images of the dataset? Isn't the original intention of this algorithm small-object datasets?
This question bothers me very much. I look forward to your reply, thank you.

How to train on custom data

Hi,
Can you please guide me on how to train on custom data? It would be helpful if you could attach a link or a document.
Thank you.

Got an error when running your code

Hello,
I got an error when running the train.py:
Traceback (most recent call last):
File "train.py", line 94, in
main(config)
File "train.py", line 32, in main
screen=True, tofile=True)
File "C:\EESRGAN\utils\util.py", line 328, in setup_logger
fh = logging.FileHandler(log_file, mode='w')
File "C:\MIMIA\Anaconda3\envs\tensorflow\lib\logging_init_.py", line 1087, in init
StreamHandler.init(self, self.open())
File "C:\MIMIA\Anaconda3\envs\tensorflow\lib\logging_init
.py", line 1116, in _open
return open(self.baseFilename, self.mode, encoding=self.encoding)
FileNotFoundError: [Errno 2] No such file or directory: 'C:\EESRGAN\saved\logs\train_RRDB_ESRGANx4_200730-145122.log

Would you please tell me what is going wrong? Thank you!
