
vztu / videval


[IEEE TIP'2021] "UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content", Zhengzhong Tu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, Alan C. Bovik

Home Page: https://ieeexplore.ieee.org/document/9405420

License: MIT License

MATLAB 61.79% Python 10.18% C 11.30% CSS 0.11% HTML 16.38% Shell 0.24%
dataset evaluation feature-extraction bvqa-model vqa iqa video-quality-assessment image-quality-assessment user-generated-content


videval's People

Contributors

vztu


videval's Issues

sklearn error

  1. scikit-learn 0.20.3 cannot be installed in a Python 3.8 environment.
  2. When the latest scikit-learn version is installed instead, the program raises an error involving sklearn and joblib at runtime.

Please tell me which environment you use. Thank you!
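The sklearn/joblib error described above is typically caused by scikit-learn 0.23+ removing the `sklearn.externals.joblib` module that older scripts import. A compatibility shim (an assumption about the error's cause, not the repo's own fix) is to fall back to the standalone `joblib` package:

```python
# Older scripts import joblib through sklearn; scikit-learn >= 0.23
# removed that path, so fall back to the standalone joblib package.
try:
    from sklearn.externals import joblib  # scikit-learn < 0.23
except ImportError:
    import joblib  # scikit-learn >= 0.23: use standalone joblib
```

Either way, `joblib.dump` / `joblib.load` then work as before.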

Can it run in Octave?

Thanks for your great work! Can this model run in Octave instead of MATLAB?

About the feature selection

Hi,

I wonder whether the features need to be normalized before running sequential feature selection (SFS), since the regressor is an SVR, which requires normalized input.

I checked the source code: the SFS does not apply min-max normalization internally, and I notice that in your script the SFS input is not normalized. Why?
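One standard way to get normalization inside the selection loop (a minimal sketch using scikit-learn's `SequentialFeatureSelector` with synthetic data, not the repo's own SFS implementation) is to wrap the scaler and the SVR in a pipeline, so features are min-max scaled inside every cross-validation fold:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
X = rng.random((50, 8))   # toy stand-in for per-video features
y = rng.random(50)        # toy stand-in for MOS

# Pipeline ensures scaling is re-fit on each CV training fold,
# so the SVR always sees normalized inputs during selection.
model = make_pipeline(MinMaxScaler(), SVR(kernel="linear"))
sfs = SequentialFeatureSelector(model, n_features_to_select=3, cv=3)
sfs.fit(X, y)
print(sfs.get_support())  # boolean mask of the 3 selected features
```

Scaling outside the pipeline (on the full matrix before SFS) would leak test-fold statistics into training; the pipeline form avoids that.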

New dataset

Thank you very much for your great work; it is very helpful. I want to test the model on other datasets. Could you please guide me on how to train it on my own dataset? Thanks in advance!
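The general recipe the paper follows for a new dataset is: extract VIDEVAL features per video, then fit an SVR against the dataset's MOS. A minimal sketch with random placeholder data (the feature matrix, MOS values, and SVR hyperparameters here are illustrative assumptions, not the repo's trained settings):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feats = rng.random((200, 60))       # placeholder: per-video features
mos = rng.random(200) * 4.0 + 1.0   # placeholder: MOS in [1, 5]

X_tr, X_te, y_tr, y_te = train_test_split(
    feats, mos, test_size=0.2, random_state=0)

# Fit the scaler on training data only, then train the SVR regressor.
scaler = MinMaxScaler().fit(X_tr)
svr = SVR(kernel="rbf", gamma=0.1, C=10.0)
svr.fit(scaler.transform(X_tr), y_tr)
pred = svr.predict(scaler.transform(X_te))
```

In practice, gamma and C would be chosen by a grid search with repeated train/test splits, as the repo's evaluation scripts do.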

Live-VQC dataset

Thanks for your great work; it has helped me a lot! I have tried to download the LIVE-VQC dataset but failed every time. Could you please provide another way to download it? Thank you very much.

Two questions about the code.

  1. Is a line `prev_YUV_frame = imresize(prev_YUV_frame, ratio);` missing around lines 64-66 of VIDEVAL/include/calc_VIDEVAL_feats_light.m? I found this problem when running the code on the LIVE-VQC dataset.
  2. Is there a mistake in the curvefit call on line 167 of VIDEVAL/demo_eval_BVQA_feats_one_dataset.py? I think the two arguments `y_param_valid_pred, y_param_valid` should instead be `y_param_train_pred, y_param_train`, by comparison with lines 217-226.

pooling strategy

In the study of the "Effects of Temporal Pooling", do you pool the features or the scores?
If scores, how do you obtain a predicted score for every frame?
If features, how are they pooled? Is pooling applied to every feature dimension separately (e.g. each of the 36 dimensions in BRISQUE)?
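For the feature-level option, the common recipe (my assumption of the usual practice, not necessarily the repo's exact code) pools each feature dimension independently over time, e.g. mean and standard deviation per dimension:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 120, 36                   # T frames, D per-frame feature dims (e.g. BRISQUE's 36)
frame_feats = rng.random((T, D)) # placeholder per-frame feature matrix

# Pool along the time axis, separately for every feature dimension:
feat_mean = frame_feats.mean(axis=0)   # shape (D,)
feat_std = frame_feats.std(axis=0)     # shape (D,)

# One fixed-length video-level feature vector, independent of T:
video_feat = np.concatenate([feat_mean, feat_std])
print(video_feat.shape)  # (72,)
```

Score-level pooling would instead require a per-frame quality predictor whose outputs are averaged; feature-level pooling sidesteps that by aggregating before regression.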

Library not loaded: @loader_path/libmex.dylib

When I run demo_compute_VIDEVAL_feats.m, I get the following error:

dlopen(/Users/XXX/VIDEVAL_release/include/matlabPyrTools/corrDn.mexmaci64, 6): Library not loaded: @loader_path/libmex.dylib

Can you give me some help?
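A common workaround when a `.mexmaci64` binary cannot locate `libmex.dylib` is to point the macOS dynamic loader at MATLAB's runtime directory before launching MATLAB. This is a general sketch, not a fix verified for this repo, and the MATLAB install path below is an assumption (adjust the version/location to match your machine):

```shell
# Assumption: MATLAB R2020a at the default macOS install location.
# Let mex binaries resolve libmex.dylib via MATLAB's runtime dir:
export DYLD_FALLBACK_LIBRARY_PATH="/Applications/MATLAB_R2020a.app/bin/maci64:${DYLD_FALLBACK_LIBRARY_PATH}"

# Launch MATLAB from this same shell so it inherits the variable:
/Applications/MATLAB_R2020a.app/bin/matlab
```

Alternatively, recompiling the matlabPyrTools mex files locally (`mex corrDn.c` etc. from within MATLAB) avoids the prebuilt binaries' hard-coded loader paths.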

A question for the grid search

Hi Zhengzhong,

I noticed that in your evaluation script (demo_eval_BVQA_feats_all_combined.py), a grid search is used to find the best [gamma, C] pair for each iteration.

This may cause a problem: the best gamma and C will differ for each iteration (each 80/20 train/test split).

With 100 parameter pairs and 100 iterations, we get a 100x100 search matrix K. I usually take the median across each row (i.e. across iterations for a fixed parameter pair), then the max over rows, to select the best pair.

That is, the final result is np.max(np.median(K, 1)), where np denotes numpy.

The current script essentially takes np.median(np.max(K, 0)).

I don't think these two approaches give the same result: the latter may report a better score, but it cannot tell us the best parameters for the whole dataset.

Please correct me if I have misunderstood anything.
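The two aggregation orders described above can be compared on a toy matrix (random numbers standing in for SVR scores, not the repo's actual results). Fixing one parameter pair across all splits can never beat letting every split pick its own best pair, so the script's number is an optimistic upper bound:

```python
import numpy as np

rng = np.random.default_rng(0)
# K[i, j]: score of parameter pair i on train/test split j
# (toy 100 pairs x 100 splits, values random for illustration).
K = rng.random((100, 100))

# Option A: fix one (gamma, C) pair, judged by its median across splits:
fixed_pair = np.max(np.median(K, 1))

# Option B (the script): each split picks its own best pair, then median:
floating_pair = np.median(np.max(K, 0))

# B dominates A: max over pairs per split >= any single pair's score.
print(fixed_pair <= floating_pair)  # True
```

Since `np.max(K, 0)` is elementwise at least as large as any single row of K, its median is at least the best row-median, which is why option B always reports a score at least as high while leaving the final parameter choice undefined.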
