Requirements:
- RAM > 10 GB
- GPU
To train a model, run train.py with the path to a config file.
The example command below shows how to launch distributed training.
python -m torch.distributed.launch --nproc_per_node=1 --nnodes=1 --node_rank=0 train.py configs/wf_mbf
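The launcher spawns `--nproc_per_node` worker processes per node and hands each one its rank through environment variables. A minimal sketch of how a training script can pick these up (variable names follow the standard `torchrun`/`--use_env` convention; the defaults are illustrative and let the same script run without a launcher):

```python
import os

# torch.distributed.launch / torchrun export these variables to each worker;
# falling back to single-process defaults keeps the script runnable standalone.
local_rank = int(os.environ.get("LOCAL_RANK", 0))  # GPU index on this node
world_size = int(os.environ.get("WORLD_SIZE", 1))  # total number of workers
rank = int(os.environ.get("RANK", 0))              # global worker index

print(f"worker {rank}/{world_size} using GPU {local_rank}")
```

With `--nproc_per_node=1 --nnodes=1` as in the command above, there is a single worker with rank 0.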
config.num_classes = 10572
config.num_image = 501196
- WebFace0.5M (10572 IDs, 501196 images)
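The two config values above come straight from the dataset listing; a trivial lookup table (names and counts as given above, nothing else assumed) makes the correspondence explicit:

```python
# (num_classes, num_image) per dataset, taken from the listing above
DATASETS = {
    "WebFace0.5M": (10572, 501196),
}

num_classes, num_image = DATASETS["WebFace0.5M"]
print(num_classes, num_image)  # the values to copy into the config
```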
!conda install -y gdown
!pip install -U --no-cache-dir gdown --pre
!git clone https://github.com/DuyNguDao/ArcFace_Pytorch.git
Note: replace the Google Drive file ID below with the ID of your dataset.
!gdown --id 1yaLoTdjybeLtXLsODjsJCqPRGvlk2a9k
!unzip /kaggle/working/faces_webface_112x112.zip
%load ./configs/wf4m_mbf.py
Change the following settings:
+ config.rec = "path of dataset"
+ config.num_classes = xxx
+ config.num_image = xxx
+ config.epoch = xxx
+ config.batch_size = xxx
+ config.val_targets = ['lfw', "agedb_30"]  # validation sets (lfw, cfp_fp, agedb_30)
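Put together, the edited config might look like the sketch below. This is an illustration only: the attribute container and exact field names should be checked against the repo's existing `configs/wf4m_mbf.py` (`SimpleNamespace` stands in for whatever config object the repo uses, and the path, epoch, and batch-size values are assumed):

```python
from types import SimpleNamespace

# Hypothetical sketch of the edited settings; field names mirror the list above.
config = SimpleNamespace()
config.rec = "/kaggle/working/faces_webface_112x112"  # assumed dataset location
config.num_classes = 10572
config.num_image = 501196
config.epoch = 20        # illustrative value
config.batch_size = 128  # illustrative value
config.val_targets = ["lfw", "agedb_30"]
```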
Write the edited config back to the file with the cell magic:
%%writefile ./configs/wf4m_mbf.py
!pip install -r requirement.txt
!python -m torch.distributed.launch --nproc_per_node=1 --nnodes=1 --node_rank=0 train.py configs/wf4m_mbf
!zip -r result.zip ./work_dirs