By Zijun Wei, Jianming Zhang, Zhe Lin, Joon-Young Lee, Niranjan Balasubramanian, Minh Hoai, Dimitris Samaras
Work in progress.

This repo provides the code for the experiments in "Learning Visual Emotion Representations From Web Data".
Requirements:
- Python 3
- PyTorch 1.0/0.4
- tqdm
The main entry-point code is in `CNNs/mains`. All of the files in this directory follow essentially the same structure; you can write your own by following `CNNs/mains/main_mclss_cross_entropy_v2.py`.
- A config script defines the training parameters (saved in `scripts`). If you are training a multi-label classification problem, please refer to `Adobe_Selected690_MultiClass_CrossEntropy_tag_based_config.json`.
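The exact config fields are defined by the repo's config parser, so check the JSON file above for the authoritative schema; the snippet below is only a hedged sketch, with hypothetical key names, of how such a config file could be generated programmatically:

```python
import json
import os
import tempfile

# Hypothetical config fields -- the real keys are defined by the repo's
# config parser; see scripts/Adobe_Selected690_MultiClass_CrossEntropy_tag_based_config.json
# for the authoritative schema.
config = {
    "train_file": "data/train_list.pkl",  # pkl with [relative_path, [labels]] entries
    "num_classes": 690,                   # e.g. the 690 selected tags
    "batch_size": 64,
    "learning_rate": 0.01,
    "num_epochs": 30,
}

# Write the config to a JSON file that could be passed via --config_file.
path = os.path.join(tempfile.gettempdir(), "my_multilabel_config.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)

# Read it back to confirm the round trip.
with open(path) as f:
    print(json.load(f)["num_classes"])
```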
- The main entry-point code is defined in `mains`; see `mains/main_mclss_cross_entropy_v2.py` for reference.
- You need to prepare your own pkl file in the format `[relative_path, [labels]]`; you can write your own data loader by referring to `CNNs/datasets/multilabel.py`.
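As a sketch of the annotation format above, the following builds a small list of `[relative_path, [labels]]` entries and pickles it; the image paths and label indices are made up for illustration, and a custom data loader (cf. `CNNs/datasets/multilabel.py`) would read the file back the same way:

```python
import os
import pickle
import tempfile

# Build an annotation list in the expected format: each entry is
# [relative_path, [labels]]. Paths and label indices here are illustrative.
samples = [
    ["images/0001.jpg", [3, 17]],  # multi-label example: two tags
    ["images/0002.jpg", [42]],     # single tag
]

pkl_path = os.path.join(tempfile.gettempdir(), "train_list.pkl")
with open(pkl_path, "wb") as f:
    pickle.dump(samples, f)

# A data loader would read the annotations back like this:
with open(pkl_path, "rb") as f:
    loaded = pickle.load(f)
for rel_path, labels in loaded:
    print(rel_path, labels)
```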
To execute the file, change into the `mains` directory and run something like:

```
python main_mclass_cross_entropy_v2.py --config_file ../scripts/[your config file]
```
Testing works the same way as training a model: modify the config file to set the corresponding parameters.
Pretrained model with softmax + embedding-distance loss (on Google Drive):

Fine-tuned models (all layers fixed except the FC layer) on benchmark datasets:
See `Dataset_release/SE30K8/ReadMe.md` for details.

See `Dataset_release/StockEmotion/ReadMe.md` for details.
If you use this code or data, please cite:

```
@inproceedings{wei2020learning,
  title={Learning Visual Emotion Representations From Web Data},
  author={Wei, Zijun and Zhang, Jianming and Lin, Zhe and Lee, Joon-Young and Balasubramanian, Niranjan and Hoai, Minh and Samaras, Dimitris},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={13106--13115},
  year={2020}
}
```