One Unified Model for Image Quality Assessment (IQA), Image Aesthetic Assessment (IAA), and Video Quality Assessment (VQA).
If you only need to run inference (or evaluation):
git clone https://github.com/Q-Future/Q-Align.git
cd Q-Align
pip install -e .
For training, install the additional dependencies as follows:
pip install -e ".[train]"
pip install flash_attn --no-build-isolation
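To verify the installation, a quick import check suffices (a minimal sketch; it assumes `pip install -e .` exposes the `q_align` package, as the imports in the examples below suggest):

```python
# Minimal sanity check: the package should be importable after installation.
import q_align
print(q_align.__file__)
```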
- CLI Interface (IQA)
export DEFAULT_IMG_PATH=fig/singapore_flyer.jpg
python q_align/evaluate/scorer.py --img_path $DEFAULT_IMG_PATH
- Python API (IQA)
from q_align.evaluate.scorer import QAlignScorer
from PIL import Image
scorer = QAlignScorer()
img_list = [Image.open("fig/singapore_flyer.jpg")] # can be multiple images
print(scorer(img_list).tolist())
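Beyond a single image, the same scorer can batch over a directory. A minimal sketch, assuming the scorer returns one score per image in input order (consistent with `.tolist()` above); the glob pattern is a hypothetical placeholder:

```python
import glob
from PIL import Image
from q_align.evaluate.scorer import QAlignScorer

scorer = QAlignScorer()

# Hypothetical directory of images; substitute your own paths.
paths = sorted(glob.glob("fig/*.jpg"))
scores = scorer([Image.open(p) for p in paths]).tolist()  # assumed flat list of floats

for path, score in zip(paths, scores):
    print(path, score)
```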
- CLI Interface (IAA)
export DEFAULT_IMG_PATH=fig/singapore_flyer.jpg
python q_align/evaluate/scorer.py --img_path $DEFAULT_IMG_PATH --aesthetic --model-path q-future/one-align
- Python API (IAA)
from q_align.evaluate.scorer import QAlignAestheticScorer
from PIL import Image
scorer = QAlignAestheticScorer()
img_list = [Image.open("fig/singapore_flyer.jpg"), Image.open("fig/boy_colorful.png")] # can be multiple images
print(scorer(img_list).tolist())
- CLI Interface (VQA)
export DEFAULT_IMG_PATH=fig/baby.mp4
python q_align/evaluate/scorer.py --img_path $DEFAULT_IMG_PATH --video --model-path q-future/one-align
- Python API (VQA)
from q_align.evaluate.scorer import QAlignVideoScorer, load_video
scorer = QAlignVideoScorer()
video_list = [load_video("fig/baby.mp4")]
print(scorer(video_list).tolist())
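Since OneAlign is a single checkpoint spanning IQA, IAA, and VQA, the same model path can plausibly be passed to each Python scorer, mirroring the CLI's `--model-path` flag. A sketch; the `pretrained` keyword is hypothetical, so check the constructor signatures in `q_align/evaluate/scorer.py`:

```python
from PIL import Image
from q_align.evaluate.scorer import (
    QAlignScorer, QAlignAestheticScorer, QAlignVideoScorer, load_video)

# `pretrained` is a hypothetical keyword; verify the actual argument name.
iqa = QAlignScorer(pretrained="q-future/one-align")
iaa = QAlignAestheticScorer(pretrained="q-future/one-align")
vqa = QAlignVideoScorer(pretrained="q-future/one-align")

imgs = [Image.open("fig/singapore_flyer.jpg")]
vids = [load_video("fig/baby.mp4")]
print(iqa(imgs).tolist(), iaa(imgs).tolist(), vqa(vids).tolist())
```

Note that each scorer loads a full LMM, so instantiating all three at once requires correspondingly more GPU memory.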
Download all the needed datasets together:
import os, glob
from huggingface_hub import snapshot_download

snapshot_download("q-future/q-align-datasets", repo_type="dataset", local_dir="./playground/data", local_dir_use_symlinks=False)

# Extract each downloaded tarball into the data directory.
tar_files = glob.glob("playground/data/*.tar")
for tar_file in tar_files:
    print(tar_file)
    os.system("tar -xf {} -C ./playground/data/".format(tar_file))
For LSVQ (a video quality dataset, optional), you can download it as follows:
import os, glob
from huggingface_hub import snapshot_download

snapshot_download("teowu/LSVQ-videos", repo_type="dataset", local_dir="./playground/data/lsvq/", local_dir_use_symlinks=False)

# Extract each gzipped tarball into the LSVQ directory.
gz_files = glob.glob("playground/data/lsvq/*.tar.gz")
for gz_file in gz_files:
    print(gz_file)
    os.system("tar -xzf {} -C ./playground/data/lsvq/".format(gz_file))
After preparing the datasets, you can evaluate the pre-trained OneAlign as follows:
- Image Quality Assessment (IQA)
python q_align/evaluate/iqa_eval.py --model_path q-future/one-align --device cuda:0
- Image Aesthetic Assessment (IAA)
python q_align/evaluate/iaa_eval.py --model_path q-future/one-align --device cuda:0
- Video Quality Assessment (VQA)
python q_align/evaluate/vqa_eval.py --model_path q-future/one-align --device cuda:0
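To run all three evaluations back to back, a small driver can invoke each script in turn. A sketch, assuming it is run from the repository root:

```python
import subprocess

# Run the IQA, IAA, and VQA evaluations sequentially on the same checkpoint.
for script in ("iqa_eval", "iaa_eval", "vqa_eval"):
    subprocess.run(
        ["python", f"q_align/evaluate/{script}.py",
         "--model_path", "q-future/one-align",
         "--device", "cuda:0"],
        check=True,  # abort if any evaluation fails
    )
```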
We will release other pre-trained checkpoints soon.
- Training Q-Align with KonIQ-10k:
sh scripts/l1_koniq.sh
- Training Q-Align with a mixture of KonIQ-10k, SPAQ, and KADID-10k:
sh scripts/iqa_mix.sh
- Training the Q-Align Aesthetic Predictor with the AVA dataset:
sh scripts/l1_ava.sh
- Training the Q-Align Video Quality Predictor with the LSVQ dataset:
sh scripts/l1_lsvq.sh
Training requires at least 4*A6000 GPUs or 2*A100 GPUs.
- Training OneAlign with the IQA datasets, the AVA dataset (IAA), and the LSVQ dataset (VQA):
sh scripts/all_.sh
Training OneAlign requires at least 8*A6000 GPUs or 4*A100 GPUs.
Please contact either of the first authors of this paper with any queries.
- Haoning Wu, [email protected], @teowu
- Zicheng Zhang, [email protected], @zzc-1998
We sincerely thank Dr Weixia Zhang (@onionbao) and Dr Chaofeng Chen (@chaofenghust) for their assistance with experiments and advice on this project.
@article{wu2023qalign,
title={Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels},
author={Wu, Haoning and Zhang, Zicheng and Zhang, Weixia and Chen, Chaofeng and Li, Chunyi and Liao, Liang and Wang, Annan and Zhang, Erli and Sun, Wenxiu and Yan, Qiong and Min, Xiongkuo and Zhai, Guangtao and Lin, Weisi},
journal={arXiv preprint arXiv:2312.xxxxx},
year={2023},
institution={Nanyang Technological University and Shanghai Jiao Tong University and SenseTime Research},
note={Equal Contribution by Wu, Haoning and Zhang, Zicheng. Project Lead by Wu, Haoning. Corresponding Authors: Zhai, Guangtao and Lin, Weisi.}
}