h4nwei / spaq
[CVPR2020] Official SPAQ & Implementation
I cannot download the models (BL_release.pt, MT-E_release.pt, MT-A_release.pt, and MT-S_release.pt) from the given URL. Could you please provide alternative URLs for downloading them? Thank you very much.
ModuleNotFoundError: No module named 'Prepare_image'
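For readers hitting the same error: it usually means Python cannot find Prepare_image.py on its import path, e.g. when the demo is launched from outside the cloned repository. A minimal sketch of one plausible fix (the repo location below is an assumption):

```python
import sys
from pathlib import Path

# Point this at the directory that actually contains Prepare_image.py;
# the cloned SPAQ repo root used here is only an assumption.
repo_root = Path.home() / "SPAQ"
sys.path.insert(0, str(repo_root))

import Prepare_image  # should now resolve
```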
Hello! We have recently launched and evaluated this algorithm on the dataset of our video quality metrics benchmark. The dataset's distortions are compression artifacts on professional and user-generated content. The method took 12th place on the global leaderboard and 5th place on the no-reference-only leaderboard in terms of SROCC. You can see more detailed results here. If you have any other video quality metric (either full-reference or no-reference) that you would like to see in our benchmark, we kindly invite you to participate; you can submit it by following the submission steps described here.
Hello, thank you for your work. I have a question about the dataset.
Does the SPAQ dataset proposed in the paper belong to the synthetic-distortion category or to the real (authentic) distortion category? The distinction does not seem to be drawn very clearly, right?
Hello, I have a question.
Your inference code does not include model.eval(),
so when the same image is fed into the model batched together with different images, the results differ.
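For context, the reported behaviour is what BatchNorm layers do when left in training mode: without model.eval(), normalisation statistics are computed from the current batch, so the same image can receive different scores depending on what it is batched with. A minimal sketch of the standard remedy (generic PyTorch, not the authors' code):

```python
import torch

def predict(model, batch):
    """Score a batch deterministically, independent of batch composition."""
    model.eval()           # switch BatchNorm/Dropout to inference behaviour
    with torch.no_grad():  # no gradients needed at inference time
        return model(batch)
```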
Hi,
Congratulations on your work. Do you have plans to release the training code?
In the demo code BL_demo.py, line 68 reads score_1 = self.model(image_1).mean().
If the output of the model is a scalar giving the score of the image, why is the mean calculated as the score of the image?
I used two images of sizes 910x512 and 560x420, and the output shapes are 32 and 24 respectively.
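For anyone else puzzled by this: the demo's preprocessing appears to cut each image into a grid of fixed-size patches (224x224 in the paper's training setup), so the model returns one score per patch (hence output lengths of 32 and 24 above), and .mean() averages them into a single image-level score. A rough sketch of the idea; the patch extractor below is illustrative and may not match Prepare_image.py's actual stride and padding choices:

```python
import torch

def image_score(model, image_tensor, patch=224, stride=224):
    """Split a CxHxW image into patches, score each, and average."""
    c, h, w = image_tensor.shape
    patches = (image_tensor
               .unfold(1, patch, stride)    # slide a window over height
               .unfold(2, patch, stride)    # slide a window over width
               .reshape(c, -1, patch, patch)
               .permute(1, 0, 2, 3))        # N x C x patch x patch
    with torch.no_grad():
        scores = model(patches)             # one quality score per patch
    return scores.mean()                    # image-level quality score
```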
Hello,
I was wondering if you provided the raw ratings obtained in the experiment (rating corresponding to each user and image)? If these are not available, would it be possible to have access to the variance of the ratings for each image? It would help compare the reliability of the subjective experiment to that of other studies.
Thanks,
Vlad
Hi Team,
May I know the dataset license terms? Does it permit free and commercial usage?
Thanks!
Hi, I want to reuse the dataset but cannot find the annotations in the downloaded files. Could you tell me where to find them?
First, thank you for sharing your work.
When I try an image of size 30x94, the program crashes. Is there any way to handle it?
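Not an official fix, but a common workaround when an input is smaller than the 224x224 patch size is to upscale (or pad) it before patch extraction; a minimal sketch using PIL, with the 224 threshold assumed from the demo's patch size:

```python
from PIL import Image

def ensure_min_size(img: Image.Image, min_side: int = 224) -> Image.Image:
    """Upscale an image whose shorter side is below the patch size.

    Workaround sketch only; upscaling may slightly bias the predicted score.
    """
    w, h = img.size
    if min(w, h) >= min_side:
        return img
    scale = min_side / min(w, h)
    return img.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
```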
Hello author, I am a beginner who recently started learning neural networks. I read your paper and downloaded the materials you provide, and I would like to reproduce your experimental results, but I don't know what the first step should be; double-clicking BL_demo.py does not open it. Could you give me some brief guidance, or tell me what to do first? Thank you.
There are problems when downloading so many files from BaiduYun.
Thanks.
We couldn't access google.com to obtain the checkpoint file. Would you please upload the checkpoint file (BL_release.pt) to Baidu Yun? Thank you!
During training, are the images first resized to 512 and then randomly cropped to $224\times224\times3$, or resized to 256 and then cropped to $224\times224\times3$?
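For reference, the two pipelines the question contrasts would look like this in torchvision; this sketch only restates the question's two options and does not assert which one the authors used:

```python
from torchvision import transforms

# Option A: resize the shorter side to 512, then random-crop 224x224x3.
option_a = transforms.Compose([
    transforms.Resize(512),
    transforms.RandomCrop(224),
    transforms.ToTensor(),
])

# Option B: resize the shorter side to 256, then random-crop 224x224x3.
option_b = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.ToTensor(),
])
```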
I downloaded the dataset from this repository and found that it only contains TestImages. How can I access the training data, or am I misunderstanding something?
Thank you for your help!
Please provide a Baidu link, thanks.
I am trying to test some images using the SPAQ BL_demo.py script, and I am getting the following error:
main()
File "BL_demo.py", line 100, in main
t.predit_quality()
File "BL_demo.py", line 64, in predit_quality
image_1 = self.prepare_image(Image.open(self.config.image_1).convert("RGB"))
File "/Users/image-quality-enhancement/SPAQ/SPAQ/Prepare_image.py", line 15, in __call__
return self.generate_patches(image, input_size=self.stride)
File "/Users/image-quality-enhancement/SPAQ/SPAQ/Prepare_image.py", line 36, in generate_patches
img = self.to_numpy(image)
File "/Users/image-quality-enhancement/SPAQ/SPAQ/Prepare_image.py", line 32, in to_numpy
p = image.numpy()
File "/Users/choprahetarth/opt/anaconda3/envs/price/lib/python3.7/site-packages/PIL/Image.py", line 546, in __getattr__
raise AttributeError(name)
AttributeError: numpy
Can you please tell me which sorts of images are accepted as input? I am passing .jpeg images (some work, while others don't).
System specs:
Python 3.8
macOS Catalina (i3)
(Yes, I have removed .cuda() wherever required in the script so that it runs on the CPU.)
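For what it's worth, the traceback shows a PIL.Image object reaching the to_numpy step in Prepare_image.py, which calls .numpy(), a torch.Tensor method that PIL images do not have; that would explain why some inputs work (those already converted to tensors upstream) and others fail. A defensive sketch of the conversion, offered as a guess at the intended behaviour rather than the authors' fix:

```python
import numpy as np
import torch
from PIL import Image

def to_numpy(image):
    """Accept either a torch.Tensor or a PIL.Image and return a numpy array."""
    if isinstance(image, torch.Tensor):
        return image.numpy()
    if isinstance(image, Image.Image):
        return np.asarray(image)
    raise TypeError(f"unsupported image type: {type(image)!r}")
```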
Pretrained model bug!!!
RuntimeError: Error(s) in loading state_dict for Baseline:
Missing key(s) in state_dict: "backbone.conv1.weight", "backbone.bn1.weight", "backbone.bn1.bias", "backbone.bn1.running_mean", "backbone.bn1.running_var", "backbone.layer1.0.conv1.weight", "backbone.layer1.0.bn1.weight", "backbone.layer1.0.bn1.bias", "backbone.layer1.0.bn1.running_mean", "backbone.layer1.0.bn1.running_var", "backbone.layer1.0.conv2.weight", "backbone.layer1.0.bn2.weight", "backbone.layer1.0.bn2.bias", "backbone.layer1.0.bn2.running_mean", "backbone.layer1.0.bn2.running_var", "backbone.layer1.0.conv3.weight", "backbone.layer1.0.bn3.weight", "backbone.layer1.0.bn3.bias", "backbone.layer1.0.bn3.running_mean", "backbone.layer1.0.bn3.running_var", "backbone.layer1.0.downsample.0.weight", "backbone.layer1.0.downsample.1.weight", "backbone.layer1.0.downsample.1.bias", "backbone.layer1.0.downsample.1.running_mean", "backbone.layer1.0.downsample.1.running_var", "backbone.layer1.1.conv1.weight", "backbone.layer1.1.bn1.weight", "backbone.layer1.1.bn1.bias", "backbone.layer1.1.bn1.running_mean", "backbone.layer1.1.bn1.running_var", "backbone.layer1.1.conv2.weight", "backbone.layer1.1.bn2.weight", "backbone.layer1.1.bn2.bias", "backbone.layer1.1.bn2.running_mean", "backbone.layer1.1.bn2.running_var", "backbone.layer1.1.conv3.weight", "backbone.layer1.1.bn3.weight", "backbone.layer1.1.bn3.bias", "backbone.layer1.1.bn3.running_mean", "backbone.layer1.1.bn3.running_var", "backbone.layer1.2.conv1.weight", "backbone.layer1.2.bn1.weight", "backbone.layer1.2.bn1.bias", "backbone.layer1.2.bn1.running_mean", "backbone.layer1.2.bn1.running_var", "backbone.layer1.2.conv2.weight", "backbone.layer1.2.bn2.weight", "backbone.layer1.2.bn2.bias", "backbone.layer1.2.bn2.running_mean", "backbone.layer1.2.bn2.running_var", "backbone.layer1.2.conv3.weight", "backbone.layer1.2.bn3.weight", "backbone.layer1.2.bn3.bias", "backbone.layer1.2.bn3.running_mean", "backbone.layer1.2.bn3.running_var", "backbone.layer2.0.conv1.weight", "backbone.layer2.0.bn1.weight", "backbone.layer2.0.bn1.bias", "backbone.layer2.0.bn1.running_mean", "backbone.layer2.0.bn1.running_var", "backbone.layer2.0.conv2.weight", "backbone.layer2.0.bn2.weight", "backbone.layer2.0.bn2.bias", "backbone.layer2.0.bn2.running_mean", "backbone.layer2.0.bn2.running_var", "backbone.layer2.0.conv3.weight", "backbone.layer2.0.bn3.weight", "backbone.layer2.0.bn3.bias", "backbone.layer2.0.bn3.running_mean", "backbone.layer2.0.bn3.running_var", "backbone.layer2.0.downsample.0.weight", "backbone.layer2.0.downsample.1.weight", "backbone.layer2.0.downsample.1.bias", "backbone.layer2.0.downsample.1.running_mean", "backbone.layer2.0.downsample.1.running_var", "backbone.layer2.1.conv1.weight", "backbone.layer2.1.bn1.weight", "backbone.layer2.1.bn1.bias", "backbone.layer2.1.bn1.running_mean", "backbone.layer2.1.bn1.running_var", "backbone.layer2.1.conv2.weight", "backbone.layer2.1.bn2.weight", "backbone.layer2.1.bn2.bias", "backbone.layer2.1.bn2.running_mean", "backbone.layer2.1.bn2.running_var", "backbone.layer2.1.conv3.weight", "backbone.layer2.1.bn3.weight", "backbone.layer2.1.bn3.bias", "backbone.layer2.1.bn3.running_mean", "backbone.layer2.1.bn3.running_var", "backbone.layer2.2.conv1.weight", "backbone.layer2.2.bn1.weight", "backbone.layer2.2.bn1.bias", "backbone.layer2.2.bn1.running_mean", "backbone.layer2.2.bn1.running_var", "backbone.layer2.2.conv2.weight", "backbone.layer2.2.bn2.weight", "backbone.layer2.2.bn2.bias", "backbone.layer2.2.bn2.running_mean", "backbone.layer2.2.bn2.running_var", "backbone.layer2.2.conv3.weight", 
"backbone.layer2.2.bn3.weight", "backbone.layer2.2.bn3.bias", "backbone.layer2.2.bn3.running_mean", "backbone.layer2.2.bn3.running_var", "backbone.layer2.3.conv1.weight", "backbone.layer2.3.bn1.weight", "backbone.layer2.3.bn1.bias", "backbone.layer2.3.bn1.running_mean", "backbone.layer2.3.bn1.running_var", "backbone.layer2.3.conv2.weight", "backbone.layer2.3.bn2.weight", "backbone.layer2.3.bn2.bias", "backbone.layer2.3.bn2.running_mean", "backbone.layer2.3.bn2.running_var", "backbone.layer2.3.conv3.weight", "backbone.layer2.3.bn3.weight", "backbone.layer2.3.bn3.bias", "backbone.layer2.3.bn3.running_mean", "backbone.layer2.3.bn3.running_var", "backbone.layer3.0.conv1.weight", "backbone.layer3.0.bn1.weight", "backbone.layer3.0.bn1.bias", "backbone.layer3.0.bn1.running_mean", "backbone.layer3.0.bn1.running_var", "backbone.layer3.0.conv2.weight", "backbone.layer3.0.bn2.weight", "backbone.layer3.0.bn2.bias", "backbone.layer3.0.bn2.running_mean", "backbone.layer3.0.bn2.running_var", "backbone.layer3.0.conv3.weight", "backbone.layer3.0.bn3.weight", "backbone.layer3.0.bn3.bias", "backbone.layer3.0.bn3.running_mean", "backbone.layer3.0.bn3.running_var", "backbone.layer3.0.downsample.0.weight", "backbone.layer3.0.downsample.1.weight", "backbone.layer3.0.downsample.1.bias", "backbone.layer3.0.downsample.1.running_mean", "backbone.layer3.0.downsample.1.running_var", "backbone.layer3.1.conv1.weight", "backbone.layer3.1.bn1.weight", "backbone.layer3.1.bn1.bias", "backbone.layer3.1.bn1.running_mean", "backbone.layer3.1.bn1.running_var", "backbone.layer3.1.conv2.weight", "backbone.layer3.1.bn2.weight", "backbone.layer3.1.bn2.bias", "backbone.layer3.1.bn2.running_mean", "backbone.layer3.1.bn2.running_var", "backbone.layer3.1.conv3.weight", "backbone.layer3.1.bn3.weight", "backbone.layer3.1.bn3.bias", "backbone.layer3.1.bn3.running_mean", "backbone.layer3.1.bn3.running_var", "backbone.layer3.2.conv1.weight", "backbone.layer3.2.bn1.weight", "backbone.layer3.2.bn1.bias", "backbone.layer3.2.bn1.running_mean", "backbone.layer3.2.bn1.running_var", "backbone.layer3.2.conv2.weight", "backbone.layer3.2.bn2.weight", "backbone.layer3.2.bn2.bias", "backbone.layer3.2.bn2.running_mean", "backbone.layer3.2.bn2.running_var", "backbone.layer3.2.conv3.weight", "backbone.layer3.2.bn3.weight", "backbone.layer3.2.bn3.bias", "backbone.layer3.2.bn3.running_mean", "backbone.layer3.2.bn3.running_var", "backbone.layer3.3.conv1.weight", "backbone.layer3.3.bn1.weight", "backbone.layer3.3.bn1.bias", "backbone.layer3.3.bn1.running_mean", "backbone.layer3.3.bn1.running_var", "backbone.layer3.3.conv2.weight", "backbone.layer3.3.bn2.weight", "backbone.layer3.3.bn2.bias", "backbone.layer3.3.bn2.running_mean", "backbone.layer3.3.bn2.running_var", "backbone.layer3.3.conv3.weight", "backbone.layer3.3.bn3.weight", "backbone.layer3.3.bn3.bias", "backbone.layer3.3.bn3.running_mean", "backbone.layer3.3.bn3.running_var", "backbone.layer3.4.conv1.weight", "backbone.layer3.4.bn1.weight", "backbone.layer3.4.bn1.bias", "backbone.layer3.4.bn1.running_mean", "backbone.layer3.4.bn1.running_var", "backbone.layer3.4.conv2.weight", "backbone.layer3.4.bn2.weight", "backbone.layer3.4.bn2.bias", "backbone.layer3.4.bn2.running_mean", "backbone.layer3.4.bn2.running_var", "backbone.layer3.4.conv3.weight", "backbone.layer3.4.bn3.weight", "backbone.layer3.4.bn3.bias", "backbone.layer3.4.bn3.running_mean", "backbone.layer3.4.bn3.running_var", "backbone.layer3.5.conv1.weight", "backbone.layer3.5.bn1.weight", "backbone.layer3.5.bn1.bias", 
"backbone.layer3.5.bn1.running_mean", "backbone.layer3.5.bn1.running_var", "backbone.layer3.5.conv2.weight", "backbone.layer3.5.bn2.weight", "backbone.layer3.5.bn2.bias", "backbone.layer3.5.bn2.running_mean", "backbone.layer3.5.bn2.running_var", "backbone.layer3.5.conv3.weight", "backbone.layer3.5.bn3.weight", "backbone.layer3.5.bn3.bias", "backbone.layer3.5.bn3.running_mean", "backbone.layer3.5.bn3.running_var", "backbone.layer4.0.conv1.weight", "backbone.layer4.0.bn1.weight", "backbone.layer4.0.bn1.bias", "backbone.layer4.0.bn1.running_mean", "backbone.layer4.0.bn1.running_var", "backbone.layer4.0.conv2.weight", "backbone.layer4.0.bn2.weight", "backbone.layer4.0.bn2.bias", "backbone.layer4.0.bn2.running_mean", "backbone.layer4.0.bn2.running_var", "backbone.layer4.0.conv3.weight", "backbone.layer4.0.bn3.weight", "backbone.layer4.0.bn3.bias", "backbone.layer4.0.bn3.running_mean", "backbone.layer4.0.bn3.running_var", "backbone.layer4.0.downsample.0.weight", "backbone.layer4.0.downsample.1.weight", "backbone.layer4.0.downsample.1.bias", "backbone.layer4.0.downsample.1.running_mean", "backbone.layer4.0.downsample.1.running_var", "backbone.layer4.1.conv1.weight", "backbone.layer4.1.bn1.weight", "backbone.layer4.1.bn1.bias", "backbone.layer4.1.bn1.running_mean", "backbone.layer4.1.bn1.running_var", "backbone.layer4.1.conv2.weight", "backbone.layer4.1.bn2.weight", "backbone.layer4.1.bn2.bias", "backbone.layer4.1.bn2.running_mean", "backbone.layer4.1.bn2.running_var", "backbone.layer4.1.conv3.weight", "backbone.layer4.1.bn3.weight", "backbone.layer4.1.bn3.bias", "backbone.layer4.1.bn3.running_mean", "backbone.layer4.1.bn3.running_var", "backbone.layer4.2.conv1.weight", "backbone.layer4.2.bn1.weight", "backbone.layer4.2.bn1.bias", "backbone.layer4.2.bn1.running_mean", "backbone.layer4.2.bn1.running_var", "backbone.layer4.2.conv2.weight", "backbone.layer4.2.bn2.weight", "backbone.layer4.2.bn2.bias", "backbone.layer4.2.bn2.running_mean", "backbone.layer4.2.bn2.running_var", "backbone.layer4.2.conv3.weight", "backbone.layer4.2.bn3.weight", "backbone.layer4.2.bn3.bias", "backbone.layer4.2.bn3.running_mean", "backbone.layer4.2.bn3.running_var", "backbone.fc.weight", "backbone.fc.bias".
Unexpected key(s) in state_dict: "state_dict", "test_results", "epoch", "train_loss", "optimizer".
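The "Unexpected key(s) in state_dict: state_dict, test_results, epoch, train_loss, optimizer" line indicates that the released .pt file is a training checkpoint dictionary wrapping the weights, not a bare state_dict. A minimal sketch of the usual unwrapping (assuming `model` is an already-constructed instance of the repo's Baseline network):

```python
import torch

# `model` is assumed to be the repo's Baseline network, already constructed.
checkpoint = torch.load("BL_release.pt", map_location="cpu")

# The file is a wrapper dict (keys: "state_dict", "test_results", "epoch",
# "train_loss", "optimizer"); the actual weights live under "state_dict".
model.load_state_dict(checkpoint["state_dict"])
```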
Hi, thanks for your nice work. I have a question about the MOS and the image attribute scores of the dataset.
The MOS values are subjective, as they are given by volunteers, but how are the other image attribute scores, such as brightness, colorfulness, and so on, obtained?
Hello,
Your article is really interesting, and I would love to access the database. However, the download process is really complicated: you need to install Baidu, then create an account, and it is all in Chinese, which is not convenient for non-Chinese-speaking users or non-Baidu users (especially if you don't want to create random accounts).
Would it be possible to put a link on a separate drive, like Google Drive?
Thank you!
Best,
Nicolas
Great work.
I am really interested in the sub-task of multi-task learning from scene labels, and I think it may be of great help for my work. However, it would be really difficult for me to reproduce the training code, especially the loss functions, since I cannot find them in your test code. Could you please release the training code for multi-task learning from scene labels?
Thank you sincerely.
The paper says:
we randomly sample 80% of the images in SPAQ for training and leave the rest for testing.
However, I want to re-create the test results from the paper. Would it be possible to provide the list of test images used to produce the evaluation results in the paper?
If I test directly on the SPAQ dataset using MT-A with the provided pretrained model, can I obtain the MT-A SRCC = 0.916 reported in the paper?
Hello, I have read your great paper and tried to download the SPAQ database to reproduce the experimental results, but since the BaiduYun share for SPAQ consists only of individual image files, it is difficult to download more than 10000 files at once. Would it be possible for you to upload one or several zipped files of the database, or to release a Google Drive link for it? The download speed of a MEGA free account is really slow.
Thanks in advance!
Hello,
In your paper, you mention that 1000 images were collected from different scenes shot with the same devices.
Is there a way to extract these scenes/images?
I was thinking of using a clustering method, but that could still produce bad clusters.
Thank you
Nicolas
I use the same package versions as in your requirements.
The output scores seem to be in error. For instance, picture 00006.jpg in the dataset has the following ground-truth scores:
MOS: 33.17
Brightness: 23.83
Colorfulness: 48
Contrast: 46
Noise: 63
Sharpness: 47
When using the pretrained weights of BL_demo, the output score is 54.924, and when using the pretrained weights of MT-A, the outputs make no sense:
54.965328216552734
1 --> 60.083614349365234
2 --> 61.15263366699219
3 --> 51.99977111816406
4 --> 64.30211639404297
5 --> 57.21396255493164
Given the L1 loss and the powerful performance of ResNet, I wonder why this happens. I strongly doubt the reliability and accuracy of the experiments in the paper; please resolve my doubt.
This problem was mentioned in #3, where you could not explain the reason.
Does your saved model store the model structure together with the parameters, or only the parameters?
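For readers wondering about the same thing, in PyTorch terms the two conventions look as follows; this is a generic sketch (the stand-in nn.Linear is not the repo's network), not a statement of how the authors saved their checkpoints:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)  # stand-in for the actual network

# Convention 1: parameters only; the architecture must be rebuilt to load.
torch.save(model.state_dict(), "params_only.pt")
restored = nn.Linear(4, 1)
restored.load_state_dict(torch.load("params_only.pt"))

# Convention 2: whole module pickled (structure + parameters); less portable
# across code versions, since unpickling needs the original class definition.
torch.save(model, "full_model.pt")
restored_full = torch.load("full_model.pt")
```

Given the "Unexpected key(s)" error reported in an earlier issue, the released .pt files appear to follow a third pattern: a checkpoint dictionary wrapping a state_dict together with training metadata.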