A PyTorch implementation of ESPCN based on CVPR 2016 paper "Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network"
The suggested model is a bit different from the one in the paper: you have 4 convolutional layers, whereas the paper only has 3. Have you tried fewer? Also, why do you have a sigmoid in the output layer?
Thank you very much!
Idan
Hello, I want to ask about "python -m visdom.server & python train.py". Is it one command or two? If I run "python -m visdom.server" alone, I get "You can navigate to http://localhost:8097". Do I then need to keep that terminal open, open a second terminal, and run "python train.py" in it?
After running python -m visdom.server & python train.py, the visdom page comes up, but no data appears: only the top navigation bar on a blue background, with none of the visualization windows (training progress, etc.) that train.py is supposed to send to visdom. Could the author tell me how to fix this? Thanks!
interesting
When running the image SR part without visdom, how can I save the PSNR and loss of each iteration?
Is there a pre-trained model?
Hi, thank you for your excellent work! Could you provide the pretrained model for testing?
Hello, when I ran $python test_video.py, I got an error about the video 爱情废柴.mp4:
[mpeg2video @ 0x355f2a20] MPEG1/2 does not support 15/1 fps
Could not open codec 'mpeg2video': Unspecified error
The other videos were fine; I don't know what's wrong with this one. Please help me, thanks!
First of all, thank you very much for your hard work replicating the program!
The training set used in the original paper is 91 high-definition images, and the ×3 SR result it reports is 32.55 dB on Set5.
The training set you used is 16700 images, yet the reproduced result is 31.51 dB on Set5. Why is the result worse with a larger training set?
I also noticed that the highest value your program reaches, 34.09 dB on Set5, is above the original paper's number, so I guess the paper reports its best value as the result?
I am using your code to reproduce the experimental results of the ESPCN paper, but the results differ noticeably from those in the paper. I suspect it is because only the Y channel of the input image is fed into the model, with no Cb and Cr; could that prevent the final image from matching the paper's results?
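For context, SR papers conventionally compute PSNR on the luma (Y) channel only, so processing just Y is consistent with the paper's evaluation; Cb and Cr are usually upscaled bicubically and merged back. A minimal sketch of the RGB-to-Y step, using the ITU-R BT.601 weights (the coefficients here are the standard ones, not taken from this repo's code):

```python
import numpy as np

def rgb_to_y(rgb):
    # BT.601 luma weights, as used by PIL's "YCbCr" conversion;
    # a common choice for Y-only super-resolution pipelines
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

rgb = np.full((4, 4, 3), 255.0)   # a small all-white test image
print(rgb_to_y(rgb)[0, 0])        # 255.0 for a white pixel
```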
this is my results:
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first
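This error means a CUDA tensor was handed to numpy directly. A minimal sketch of the usual fix, copying to host memory first (the tensor here is a stand-in for the model output, not the repo's actual variable):

```python
import torch

out = torch.zeros(1, 1, 4, 4)          # stand-in for the model output
if torch.cuda.is_available():
    out = out.cuda()

# .numpy() only works on CPU tensors, so detach and copy to host first
arr = out.detach().cpu().numpy()
print(arr.shape)
```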
As the title says, all the Baidu Yun URLs are dead, and when I try to load the pre-trained model I get "UnpicklingError: invalid load key, '\xef'". Could you release the data again, or send it to my email [email protected]?
P.S. My platform is Windows with Python 3.6.2 and PyTorch 0.4.0 (Windows doesn't have PyTorch 0.3.1).
RuntimeError: cuda runtime error (48) : no kernel image is available for execution on the device at c:\programdata\miniconda3\conda-bld\pytorch_1532505617613\work\aten\src\thc\generic/THCTensorMathPointwise.cu:59
0%| | 0/261 [00:23<?, ?it/s]
Hi,
Could you provide alternative download links for the dataset, as it is difficult to download from Baidu?
Maybe a shared Google Drive link would be helpful.
Thanks,
Pramod
hi,
what are the license terms for pixel_shuffle()?
can I apply ESPCN (with torch.nn.PixelShuffle) to my service for commercial use?
Thanks
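For reference, the operation in question is exposed in PyTorch as torch.nn.PixelShuffle (and torch.nn.functional.pixel_shuffle). A quick shape check, with sizes chosen arbitrarily for illustration:

```python
import torch

# r = 3 upscale: (N, C*r^2, H, W) -> (N, C, H*r, W*r)
ps = torch.nn.PixelShuffle(3)
x = torch.randn(1, 9, 8, 8)
y = ps(x)
print(y.shape)  # torch.Size([1, 1, 24, 24])
```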
Hi! When I run test_image.py to load the given model, I get the error: _pickle.UnpicklingError: invalid load key, '\xef'. I tried CSAILVision/places365#25 and set the encoding to utf-8 or unicode, but it doesn't work. Could anyone give me advice? Thanks in advance!
Hi, I used test_video.py to test the real-time video SR performance of the model. A 240P video works just fine, but with a 360P video the fps drops to 10.
I notice that GPU utilization is only about 10%, while the CPU is at maximum capacity.
The line

image = Variable(ToTensor()(y)).view(1, -1, y.size[1], y.size[0])

uses up most of the CPU, and the line

out = out.cpu()

takes up most of the time: it takes 20 ms while the model itself only takes 4 ms. I thought it was slow because it needs to clean up GPU RAM, so I tried torch.cuda.empty_cache() before it; the time .cpu() consumed went down, but the overall running time didn't change. I don't know how to tackle this problem, and I don't know what .cpu() does except copy data from the GPU. I could really use some help. (Or is this exactly how it works, and I have no option except a better CPU?)
My CPU is an i7-9700K and my GPU is an RTX 2080 Ti. I'd also like to know which CPU you used for testing and what fps you got for real-time VSR.
Thanks in advance!!
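One note on the timing point above: CUDA kernel launches are asynchronous, and .cpu() blocks until all queued GPU work has finished, so it often absorbs the model's kernel time rather than costing 20 ms itself. A sketch of how one might time just the device-to-host copy, assuming a CUDA device (the code falls back to CPU otherwise, and the tensor size is illustrative):

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
out = torch.randn(1, 1, 360, 640, device=device)  # stand-in for the SR output

if device == "cuda":
    torch.cuda.synchronize()   # drain any pending kernels first
t0 = time.perf_counter()
host = out.cpu()               # now this measures only the copy itself
if device == "cuda":
    torch.cuda.synchronize()
copy_ms = (time.perf_counter() - t0) * 1e3
print(f"copy took {copy_ms:.2f} ms")
```

Timed this way, the copy and the model forward pass are attributed separately, which should show whether the transfer really is the bottleneck.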
Hello,
MSU Video Group has recently launched Video Super Resolution Benchmark and evaluated this algorithm.
It takes 14th place by subjective score, 8th by PSNR, and 14th by our metric ERQAv1.0. You can see the results here.
If you have any other VSR method you want to see in our benchmark, we kindly invite you to participate.
You can submit it for the benchmark, following the submission steps.
When I ran test_image.py, an error occurred: FileNotFoundError: [Errno 2] No such file or directory: 'epochs/epoch_3_100.pt'.
epoch_3_100.pt could not be found; please help me solve this, thank you very much.
[WinError 3] The system cannot find the path specified.: 'data/test/SRF_3/data/'
Thanks for your nice work, firstly, but the test video URL is no longer valid. Could you offer a new URL for it?
Or could you send the test videos to my email [email protected]?
Thanks!
Why can't my Google Chrome access this URL?
It prompts: This site can't be reached. localhost refused to connect.
What can I do? Thank you.
>>> import torch
>>> torch.load('epochs/epoch_3_100.pt')
<open file 'epochs/epoch_3_100.pt', mode 'rb' at 0x7f7e3eb91e40>
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/torch/serialization.py", line 231, in load
return _load(f, map_location, pickle_module)
File "/usr/local/lib/python2.7/dist-packages/torch/serialization.py", line 369, in _load
magic_number = pickle_module.load(f)
cPickle.UnpicklingError: invalid load key, '?'.
>>>
I tried to use the existing model in the epochs folder, but got the error above.
Thanks
Thanks for your work! But I have a small question about image normalization.
Line 45 in b1dd698
Hello:
This work is wonderful, and I wonder if I could get a ready-made trained model for it. This would help me a lot. Thanks.
Hey man,
is there a working download link for the training dataset?
You could also send it to my email: [email protected]
Thanks a million!
I am a PyTorch beginner.
When I ran python test_image.py, there was an error:
File "test_image.py", line 29, in
model.load_state_dict(torch.load('epochs/' + MODEL_NAME))
File "/home//anaconda3/envs/pytorch/lib/python3.5/site-packages/torch/serialization.py", line 229, in load
return _load(f, map_location, pickle_module)
File "/home//anaconda3/envs/pytorch/lib/python3.5/site-packages/torch/serialization.py", line 367, in _load
magic_number = pickle_module.load(f)
_pickle.UnpicklingError
What can I do to solve it? Thank you for your help!
Hello, I want to know where to download your model. Thank you.
Firstly, thanks for your work!
When using Python 3 to load epoch_3_100.pt, I get _pickle.UnpicklingError: invalid load key, '\xef'.
How can I resolve this problem?
Hi, I was wondering how to draw the loss/psnr graph?
The paper blurs the HR image with a Gaussian filter and then sub-samples it. Is the Gaussian kernel implemented in the code? I only see bicubic.
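For comparison, the degradation the paper describes (Gaussian blur followed by subsampling) can be sketched in a few lines of numpy. The kernel size and sigma below are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def gaussian_kernel(size=13, sigma=1.6):
    # 1-D Gaussian, normalized to sum to 1
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur_and_subsample(img, scale=3, size=13, sigma=1.6):
    k = gaussian_kernel(size, sigma)
    # a 2-D Gaussian is separable: filter along rows, then along columns
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred[::scale, ::scale]   # keep every scale-th pixel

hr = np.random.rand(24, 24)
lr = blur_and_subsample(hr, scale=3)
print(lr.shape)  # (8, 8)
```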
I built the Python environment with Anaconda, using Python 3.7.13. When I ran the image test command, an error was reported. I searched with Google and found a solution: installing pillow 6.1.0. Then I ran the training command, and another error was reported:
Traceback (most recent call last):
  File "train.py", line 116, in <module>
    engine.train(processor, train_loader, maxepoch=NUM_EPOCHS, optimizer=optimizer)
  File "/home/lxp/anaconda3/envs/hh_espcn/lib/python3.7/site-packages/torchnet/engine/engine.py", line 63, in train
    state['optimizer'].step(closure)
  File "/home/lxp/anaconda3/envs/hh_espcn/lib/python3.7/site-packages/torch/optim/adam.py", line 58, in step
    loss = closure()
  File "/home/lxp/anaconda3/envs/hh_espcn/lib/python3.7/site-packages/torchnet/engine/engine.py", line 56, in closure
    self.hook('on_forward', state)
  File "/home/lxp/anaconda3/envs/hh_espcn/lib/python3.7/site-packages/torchnet/engine/engine.py", line 31, in hook
    self.hooks[name](state)
  File "train.py", line 45, in on_forward
    meter_loss.add(state['loss'].data[0])
IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number
I searched again and found that a solution is to change this code:
meter_loss.add(state['loss'].data[0])
to:
meter_loss.add(state['loss'].item())
After that, the training process finished and I got the model file 'epoch_3_100.pt' in the epochs directory.
However, when I used this model to test images, I got a very strange result like this:
Did I do anything wrong in these steps?
This was deleted from the earlier records. How can I obtain it now?
Is the video you provided displayed in real time?
And if so, on what GPU?
hello,
I can open the train/val dataset download link, but the file no longer exists.
How can I download it?
1. Does the training set have to be the VOC2012 dataset?
2. If I test with video captured by a camera, or just 720P video, can real-time enhancement be achieved?
I'm afraid I only find the link for dataset but no pretrained models.
Good evening! I am very sorry for disturbing you! I don't know where is the train.h5 file. Can you tell me where this file can go? Thank you.
My CUDA is 10.0, Python is 3.6, GPU compute capability is 5.0, TensorFlow version is 1.14.0, and running train.py gives an error: RuntimeError: CUDA runtime error (8): invalid device function. Another thing I don't understand: when I run train.py, what is the 261 in the amount of data being read, and where does it come from? 0%| | 0/261 [00:00<?, ?it/s]
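On the 261: it is most likely the number of mini-batches per epoch shown by the tqdm progress bar, i.e. the training-set size divided by the batch size, rounded up. Assuming the 16700-image training set mentioned elsewhere in these issues and a batch size of 64 (an assumption, not verified against the repo's defaults):

```python
import math

num_images = 16700   # training-set size reported in another issue above
batch_size = 64      # assumed batch size

# batches per epoch = ceil(dataset size / batch size)
print(math.ceil(num_images / batch_size))  # 261
```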
Download links on Baidu redirect to a cloud application, which requires an account, which in turn requires a Chinese mobile number to set up. Would it be possible to host these datasets somewhere more accessible?
Is your model file damaged? Do I now need to train it myself to use it?
This is what I get:
File "test_image.py", line 29, in
model.load_state_dict(torch.load('epochs/' + MODEL_NAME))
File "/home/a80050185/anaconda3/envs/py2.7/lib/python2.7/site-packages/torch/serialization.py", line 267, in load
return _load(f, map_location, pickle_module)
File "/home/a80050185/anaconda3/envs/py2.7/lib/python2.7/site-packages/torch/serialization.py", line 410, in _load
magic_number = pickle_module.load(f)
cPickle.UnpicklingError: invalid load key, '\xef'.
Please help