vinthony / deep-blind-watermark-removal

[AAAI 2021] Split then Refine: Stacked Attention-guided ResUNets for Blind Single Image Visible Watermark Removal

Home Page: https://arxiv.org/abs/2012.07007

Python 96.96% Jupyter Notebook 3.04%
watermark-removal pytorch aaai2021

deep-blind-watermark-removal's Introduction

This repo contains the code and results of the AAAI 2021 paper:

Split then Refine: Stacked Attention-guided ResUNets for Blind Single Image Visible Watermark Removal
Xiaodong Cun, Chi-Man Pun*
University of Macau

Datasets | Models | Paper | 🔥Online Demo!(Google CoLab)



The overview of the proposed two-stage framework. First, we propose a multi-task network, SplitNet, for watermark detection, removal, and recovery. Then, we propose RefineNet to smooth the learned region using the predicted mask and the recovered background from the previous stage. As a consequence, our network can be trained in an end-to-end fashion without any manual intervention. Note that, for clarity, we do not show the skip-connections between the encoders and decoders.
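The two-stage flow described above can be sketched in PyTorch with toy stand-ins. This is a minimal illustration of the split-then-refine idea only; the layer names, shapes, and modules here are assumptions, not the paper's actual attention-guided ResUNets.

```python
import torch
import torch.nn as nn

class ToySplitNet(nn.Module):
    """Stage 1: jointly predicts watermark mask, coarse background, and watermark."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, 3, padding=1)
        self.mask_head = nn.Conv2d(16, 1, 1)  # watermark detection
        self.bg_head = nn.Conv2d(16, 3, 1)    # coarse background recovery
        self.wm_head = nn.Conv2d(16, 3, 1)    # watermark recovery

    def forward(self, x):
        f = torch.relu(self.backbone(x))
        return torch.sigmoid(self.mask_head(f)), self.bg_head(f), self.wm_head(f)

class ToyRefineNet(nn.Module):
    """Stage 2: smooths the coarse background, guided by the predicted mask."""
    def __init__(self):
        super().__init__()
        self.refine = nn.Conv2d(3 + 1 + 3, 3, 3, padding=1)

    def forward(self, x, mask, coarse_bg):
        return self.refine(torch.cat([x, mask, coarse_bg], dim=1))

x = torch.randn(1, 3, 64, 64)                 # watermarked input
mask, coarse_bg, wm = ToySplitNet()(x)        # stage 1: split
refined = ToyRefineNet()(x, mask, coarse_bg)  # stage 2: refine
```

Because stage 2 consumes stage 1's outputs as tensors, both stages train end-to-end with a single backward pass, matching the "no manual intervention" claim above.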


The whole project will be released around January 2021.

Datasets

We synthesized four different datasets for training and testing; you can download them via huggingface.


Pre-trained Models

The other pre-trained models are still being reorganized and uploaded; they will be released soon.

Demos

An easy-to-use online demo can be found on Google Colab.

The local demo will be released soon.

Pre-requirements

pip install -r requirements.txt

Train

Besides training our methods, we also give an example of how to train s2am under our framework. More details can be found in the shell scripts.

bash examples/evaluation.sh

Test

bash examples/test.sh

Acknowledgements

The author would like to thank Nan Chen for her helpful discussion.

Part of the code is based on our previous work on image harmonization, s2am.

Citation

If you find our work useful in your research, please consider citing:

@misc{cun2020split,
      title={Split then Refine: Stacked Attention-guided ResUNets for Blind Single Image Visible Watermark Removal}, 
      author={Xiaodong Cun and Chi-Man Pun},
      year={2020},
      eprint={2012.07007},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Contact

Please contact me if you have any questions (Xiaodong Cun, [email protected]).


deep-blind-watermark-removal's Issues

Can I get some evaluating metric or something else?

Excuse me, how can I evaluate the results of removing watermarks?

Do you have some metric like mAP or IoU? I just wonder how to evaluate the results of removing watermarks.

Thx!
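For context on the question above: watermark-removal results are commonly evaluated with image-restoration metrics such as PSNR and SSIM against the watermark-free ground truth, plus IoU or F1 on the predicted mask. A minimal NumPy sketch of PSNR and mask IoU follows; this is an illustrative assumption, not the repo's official evaluation code.

```python
import numpy as np

def psnr(pred, gt, max_val=255.0):
    """Peak signal-to-noise ratio between two images in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def mask_iou(pred_mask, gt_mask):
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter) / float(union) if union > 0 else 1.0
```

Higher PSNR means the restored image is closer to the clean target; mask IoU measures how well the predicted watermark region matches the ground-truth mask.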

Corrupted images in dataset

Thanks for your excellent work! However, I found some corrupted images in the dataset. In 10kgray, reading val_images/image/COCO_val2014_000000087481-Kaporal_Jeans_Logo-185.png, train_images/image/COCO_val2014_000000260525-Breyers_Logo-163.png, and train_images/image/COCO_val2014_000000510651-University_Of_Oxford_Logo_Text-179.png with imageio.v2.imread raises an "image file is truncated" error. In fact, the latter two images do not contain the watermark and are thus not usable.

The three truncated images.

COCO_val2014_000000087481-Kaporal_Jeans_Logo-185
COCO_val2014_000000260525-Breyers_Logo-163
COCO_val2014_000000510651-University_Of_Oxford_Logo_Text-179
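A quick way to catch files like these before training is to attempt a full decode of every image and record the ones that fail. The helper below is a generic sketch (not part of this repo), using Pillow's `Image.load`, which raises on truncated data; the demo at the end fabricates a good and a truncated PNG in a temporary folder.

```python
import glob
import os
import tempfile
from PIL import Image

def find_truncated(folder, pattern="*.png"):
    """Return the paths under `folder` that PIL cannot fully decode."""
    bad = []
    for path in sorted(glob.glob(os.path.join(folder, pattern))):
        try:
            with Image.open(path) as im:
                im.load()  # forces a full decode; truncation raises here
        except Exception:
            bad.append(path)
    return bad

# Self-contained demo: one valid PNG and one truncated copy of it.
folder = tempfile.mkdtemp()
Image.new("RGB", (32, 32)).save(os.path.join(folder, "good.png"))
with open(os.path.join(folder, "good.png"), "rb") as f:
    data = f.read()
with open(os.path.join(folder, "bad.png"), "wb") as f:
    f.write(data[: len(data) // 2])  # cut the file in half to corrupt it
bad_files = find_truncated(folder)
```

Note that setting `ImageFile.LOAD_TRUNCATED_IMAGES = True` (as the training code in a later issue does) silences this error instead of surfacing it, so a pre-training scan like this is the safer place to detect corrupt files.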

Modifying COCO dataset class to include 256x256 Random Crop gives CUDA error

Hi, I was trying to train the model on the logo dataset that you provided, without resizing the images.
Instead of a hard resize to 256x256, which distorts the aspect ratio, I decided to use random cropping; however, after a few epochs I ran into a CUDA error.

Here is the error:

/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [228,0,0], thread: [0,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [228,0,0], thread: [1,0,0] Assertion input_val >= zero && input_val <= one failed.
... (the same assertion repeats for dozens of threads across blocks [228,0,0] and [197,0,0]) ...
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [197,0,0], thread: [63,0,0] Assertion input_val >= zero && input_val <= one failed.
Traceback (most recent call last):
File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/deep-blind-watermark-removal_patch/main.py", line 71, in <module>
main(args)
File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/deep-blind-watermark-removal_patch/main.py", line 41, in main
Machine.train(epoch)
File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/deep-blind-watermark-removal_patch/scripts/machines/VX.py", line 129, in train
l2_loss,att_loss,wm_loss,style_loss,ssim_loss = self.loss(outputs[0],self.norm(target),outputs[1],mask,outputs[2],self.norm(wm))
File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/wm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/deep-blind-watermark-removal_patch/scripts/machines/VX.py", line 85, in forward
att_loss = self.attLoss(pred_ms, mask)
File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/wm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/wm/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 530, in forward
return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/wm/lib/python3.8/site-packages/torch/nn/functional.py", line 2525, in binary_cross_entropy
return torch._C._nn.binary_cross_entropy(
RuntimeError: CUDA error: device-side assert triggered

Initially I thought this might be due to errors in the input mask, since BCE expects values between 0 and 1. However, upon printing the values of mask and pred_ms (the prediction), I found that the model's prediction tensor was NaN.

mask tensor([[[[1., 1., 1.,  ..., 0., 0., 0.],
          [1., 1., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          ...,
          [1., 1., 1.,  ..., 0., 0., 0.],
          [1., 1., 1.,  ..., 0., 0., 0.],
          [1., 1., 1.,  ..., 0., 0., 0.]]],


        [[[0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          ...,
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.]]],


        [[[0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          ...,
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.]]],


        [[[0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          ...,
          [1., 0., 0.,  ..., 1., 1., 1.],
          [1., 1., 0.,  ..., 1., 1., 0.],
          [1., 1., 1.,  ..., 1., 0., 0.]]]], device='cuda:0')

pred_mask tensor([[[[nan, nan, nan,  ..., nan, nan, nan],
          [nan, nan, nan,  ..., nan, nan, nan],
          ...,
          [nan, nan, nan,  ..., nan, nan, nan]]],

        ... (the remaining three batch elements are likewise entirely NaN) ...

       ]], device='cuda:0', grad_fn=<SigmoidBackward>)
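For reference, the device-side assert from Loss.cu fires because binary_cross_entropy requires its input in [0, 1], which NaN violates. A defensive pattern (an assumption, not code from this repo) is to check finiteness before the loss and to compute BCE on raw logits with `binary_cross_entropy_with_logits`, which is numerically more stable than sigmoid followed by BCE; the underlying NaN itself is usually addressed by lowering the learning rate or clipping gradients.

```python
import torch
import torch.nn.functional as F

def safe_bce(pred_logits, target):
    """BCE on raw logits with an explicit finiteness check on the prediction."""
    if not torch.isfinite(pred_logits).all():
        raise ValueError("non-finite values in prediction; consider lowering "
                         "the learning rate or enabling gradient clipping")
    return F.binary_cross_entropy_with_logits(pred_logits, target)

logits = torch.randn(2, 1, 8, 8)                       # pre-sigmoid outputs
target = torch.randint(0, 2, (2, 1, 8, 8)).float()     # binary mask
loss = safe_bce(logits, target)
```

Raising on the CPU side like this gives a readable Python traceback at the failing step, instead of the delayed and hard-to-localize CUDA device-side assert shown above.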

Following is the code that I am using to crop the patches:

from __future__ import print_function, absolute_import

import os
import csv
import numpy as np
import json
import random
import math
import matplotlib.pyplot as plt
from collections import namedtuple
from os import listdir
from os.path import isfile, join

import torch
# torch.manual_seed(17)
import torch.utils.data as data

from scripts.utils.osutils import *
from scripts.utils.imutils import *
from scripts.utils.transforms import *
import torchvision.transforms as transforms
from PIL import Image
from PIL import ImageEnhance
from PIL import ImageFilter
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

import glob

class COCO(data.Dataset):
    def __init__(self,train,config=None, sample=[],gan_norm=False):

        self.train = []
        self.anno = []
        self.mask = []
        self.wm = []
        self.input_size = config.input_size
        self.normalized_input = config.normalized_input
        self.base_folder = config.base_dir
        self.dataset = train+config.data

        # Note: config attributes are already read above, so config can never
        # actually be None at this point; `is None` is the idiomatic check.
        if config is None:
            self.data_augumentation = False
        else:
            self.data_augumentation = config.data_augumentation

        self.istrain = False if self.dataset.find('train') == -1 else True
        self.sample = sample
        self.gan_norm = gan_norm

        file_paths2 = sorted(glob.glob(join(self.base_folder,'wm_DIV2K','full_*',self.dataset,'image/*')))

        for fl2 in file_paths2:  
            file_name2 = fl2.split('/')[-1]
            self.train.append(fl2)
            self.mask.append(fl2.replace('/image/','/mask/'))
            self.wm.append(fl2.replace('/image/','/wm/'))
            self.anno.append(os.path.join(self.base_folder,'wm_DIV2K','natural',self.dataset,file_name2.split('-')[0]+'.'+file_name2.split('.')[-1]))

        if len(self.sample) > 0:
            self.train = [self.train[i] for i in self.sample]
            self.mask = [self.mask[i] for i in self.sample]
            self.wm = [self.wm[i] for i in self.sample]  # keep wm aligned with the other lists
            self.anno = [self.anno[i] for i in self.sample]

        self.trans = transforms.Compose([
                transforms.ToTensor(),
            ])

        print('total Dataset of '+self.dataset+' is : ', len(self.train))


    def __getitem__(self, index):
        img = Image.open(self.train[index]).convert('RGB')
        mask = Image.open(self.mask[index]).convert('L')
        anno = Image.open(self.anno[index]).convert('RGB')
        wm = Image.open(self.wm[index]).convert('RGB')

        W, H = img.size

        if W < self.input_size or H < self.input_size:
            img = img.resize((self.input_size, self.input_size))
            mask = mask.resize((self.input_size, self.input_size))
            anno = anno.resize((self.input_size, self.input_size))
            wm = wm.resize((self.input_size, self.input_size))
            

        i, j, h, w = transforms.RandomCrop.get_params(img, output_size=(self.input_size, self.input_size))
        img = transforms.functional.crop(img,i,j,h,w)
        mask = transforms.functional.crop(mask,i,j,h,w)
        anno = transforms.functional.crop(anno,i,j,h,w)
        wm = transforms.functional.crop(wm,i,j,h,w)
        
        
        img = self.trans(img)
        anno = self.trans(anno)
        mask = self.trans(mask)        
        wm = self.trans(wm)

        return {"image": img,
                "target": anno, 
                "mask": mask, 
                "wm": wm,
                "name": self.train[index].split('/')[-1],
                "imgurl":self.train[index],
                "maskurl":self.mask[index],
                "targeturl":self.anno[index],
                "wmurl":self.wm[index]
                }

    def __len__(self):

        return len(self.train)

Any help would be appreciated!

Try on my image

Hi, can I try it on a watermarked image for which I do not have a mask? If so, could you provide an example in the Colab file of removing a watermark from an uploaded image or from a test folder?

The results before and after running are different

image

code as follow:

from PIL import Image
import numpy as np
import torch
import torchvision

img = Image.open('XX.jpg').resize((256, 256))
image = np.array(img)
ih = torch.tensor(np.transpose(image, (2, 0, 1))) / 255.0
ih = ih.unsqueeze(0)
imoutput, immask, imwatermark = model(ih.cuda())
# the `*` operators below were eaten by markdown in the original post;
# `im` is left as posted (it is not defined in the snippet)
imcoarser, imrefine, imwatermark = imoutput[1]*immask + im*(1-immask), imoutput[0]*immask + im*(1-immask), imwatermark*immask
torchvision.utils.save_image(imcoarser, 'imrefine.jpg')

Other Pre-trained Models

Dear XiaoDong,

When will you be able to upload the pre-trained model on the grey image?

Look forward to your favourable reply.

Some question about dataset

Thanks for your impressive work.
I have some question about dataset in the onedrive link.
I guess:
10kmid.zip is LOGO-L
10khigh.zip is LOGO-H
10kgray.zip is LOGO-Gray
27kpng.zip is LOGO-30k
I wonder if this mapping is correct because there is some difference in naming.
Looking forward to your reply.

I can't find COCO_val2014.png

I tried your evaluate.sh and downloaded your datasets, 10kgray and 10kmid.
Running "bash examples/evaluate.sh" fails with:
"FileNotFoundError: [Errno 2] No such file or directory: '/home/dell/watermark/10kgray/masks/natural/COCO_val2014.png'"
What is "COCO_val2014.png"?

Supplementary Material

Kindly share your supplementary material, which describes the model architecture changes mentioned in your paper. Thanks!

Model unable to run on multiple GPUs

I have multiple GPUs and would like to train the model on them for faster training.
I see that you have already implemented multi-GPU training using nn.DataParallel. There were some bugs in VX.py, which were solved after I converted "self.model" to "self.model.module".

Yet, even after ensuring that I am using "CUDA_VISIBLE_DEVICES=0,1", I still see only GPU0's memory being filled, not GPU1's.

The model gives CUDA out of memory if I try to use an input size >= 512 with a batch size of 12 or even 8.

Any idea why it is only using one of the two GPUs?

Thanks
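For context on the issue above: nn.DataParallel scatters each batch along dimension 0 across the GPUs PyTorch can see, so a second GPU stays idle if `torch.cuda.device_count()` does not report it, or if the model was pinned to one device before wrapping. A minimal sketch of the intended wrapping order (the model and sizes here are illustrative, not the repo's):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())

# Wrap BEFORE moving to the device; DataParallel replicates the module and
# scatters inputs along the batch dimension across all visible GPUs.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

out = model(torch.randn(4, 3, 16, 16).to(device))  # batch of 4 is split across GPUs
```

Checking `torch.cuda.device_count()` inside the training script is a quick way to confirm that `CUDA_VISIBLE_DEVICES=0,1` actually reached the process (it must be set before torch initializes CUDA).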

On video?

Would this model be available for videos?
