
cosface_pytorch's Introduction

CosFace_pytorch

PyTorch implementation of CosFace


  • Deep Learning Platform: PyTorch 0.4.1
  • OS: CentOS Linux release 7.5
  • Language: Python 2.7
  • CUDA: 8.0


Result(new)

A single model trained on CASIA-WebFace achieves ~99.2% accuracy on LFW (link: https://pan.baidu.com/s/1uOBATynzBTzZwrIKC4kcAA, password: 69e6).

Note: PyTorch 0.4 seems to behave very differently from 0.3, which has kept me from fully reproducing the previous results. I am still tuning the parameters.

The fully connected layer is not initialized with Xavier; the initialization used instead is more conducive to model convergence.
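For reference, a minimal sketch of what this non-Xavier initialization often looks like in CosFace/SphereFace-style PyTorch code (the class name MarginCosineProduct and the exact constants are assumptions; check layer.py for this repo's actual version): each class-weight row is drawn uniformly and then rescaled to unit L2 norm.

```python
import torch
import torch.nn as nn

class MarginCosineProduct(nn.Module):
    """Hypothetical sketch of the classifier layer's weight initialization."""
    def __init__(self, in_features, out_features):
        super(MarginCosineProduct, self).__init__()
        self.weight = nn.Parameter(torch.Tensor(out_features, in_features))
        # Uniform in [-1, 1]; renorm_ then clamps every row's L2 norm to 1e-5
        # and mul_ scales it back up, leaving each class vector with unit norm.
        self.weight.data.uniform_(-1, 1).renorm_(2, 0, 1e-5).mul_(1e5)
```

Unit-norm class vectors make the initial logits pure cosines, which may be why this converges better here than Xavier.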

Result(old)

Network                           Hyper-parameters    Accuracy on LFW
Sphere20                          s=30, m=0.35        99.08%
Sphere20                          s=30, m=0.40        99.23%
LResnet50E-IR (in ArcFace paper)  s=30, m=0.35        99.45%

cosface_pytorch's People

Contributors: mugglewang

cosface_pytorch's Issues

About the title

I think this project implements 'SphereFace', not 'CosFace'.
CosFace uses the LMCL loss function, not the A-Softmax loss.
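To make the distinction concrete: A-Softmax (SphereFace) applies a multiplicative angular margin, cos(m·θ), while LMCL (CosFace) subtracts an additive margin from the cosine and rescales, s·(cos θ − m). A small sketch of the target-class logit under each loss (the function names are made up; s and m match this repo's logged values):

```python
import math

def a_softmax_logit(theta, m=4):
    # SphereFace / A-Softmax: multiplicative angular margin, cos(m * theta)
    return math.cos(m * theta)

def lmcl_logit(theta, s=30.0, m=0.35):
    # CosFace / LMCL: additive cosine margin, s * (cos(theta) - m)
    return s * (math.cos(theta) - m)

print(a_softmax_logit(math.pi / 6))  # cos(4 * 30°) = cos(120°) = -0.5
print(lmcl_logit(math.pi / 6))       # 30 * (cos(30°) - 0.35) ≈ 15.48
```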

Can not reproduce the accuracy

I'm quite confused: I used the same code but can't get the same result. My PyTorch version is 0.4. Can you share your training tricks?
Best wishes!

CASIA-WebFace and LFW

Do you happen to have the CASIA-WebFace and LFW datasets already preprocessed as used for these experiments, and could you share a download link? CASIA-WebFace is hard to find. It would mean a lot if you could provide the preprocessed datasets, if it's not too much trouble.

Regards.

Question w.r.t cosine_sim

Hi Yirong, thank you so much for sharing your code. I found that torch.ger() is used to compute the cosine similarity between two vectors in 'layer.py'. However, this operation computes the outer product. Shouldn't the inner product be used here?

Question w.r.t cosine_sim

Hi, thank you so much for sharing your code. I found that torch.ger() is used to compute the cosine similarity between two vectors in 'layer.py'. However, this operation computes the outer product. Shouldn't the inner product be used here?

PS: I closed an earlier issue with the same question where I mistyped your name. Sorry about that.
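For anyone landing here, a small sketch of the distinction the question raises: torch.ger(a, b) builds the outer-product matrix with entries a[i]·b[j], while cosine similarity between two vectors needs their inner product divided by the norms. (This only shows what the two operations compute, not what layer.py intends.)

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([4.0, 5.0, 6.0])

# torch.ger(a, b) is the OUTER product: a 3x3 matrix with entries a[i] * b[j]
outer = torch.ger(a, b)

# Cosine similarity uses the INNER product of the two vectors:
cos = torch.dot(a, b) / (a.norm() * b.norm())
```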

Help!

What is the content of the txt file in /home/wangyf/Project/sphereface/test/data/pairs.txt in the file lfw_eval.py?

Can you upload your model?

Hi @MuggleWang, thank you for your work on this project. I don't have sufficient resources to train a model. Can you share your PyTorch model with us? Thank you anyway!

CosFace Layer Accuracy

I understand that the CosFace layer is applied to the feature vector of a given model, and that its output is then passed to the criterion (cross-entropy) to compute the loss.

However, how do I calculate accuracy if my validation dataset is a classification problem? I know that for LFW we use the features and cosine similarity, but I want regular classification accuracy, assuming train and val share the same classes. I tried using the output of the cosface layer as the logits for the accuracy metric, since it has the same shape as the targets/labels, but that leads to 0% accuracy even though the loss seems to train properly; the same setup with just a linear layer + cross-entropy gives good results.

Basically, I want to use the output of cosface layer to calculate train and validation accuracy.

Also, how should the s parameter be selected, if you have any thoughts?
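One possible explanation (my assumption, not confirmed by the repo): the margin-adjusted output s·(cos θ_y − m) is only meant for the loss; for accuracy, score each sample against the plain cosines between the normalized feature and the layer's class weights, with no scale and no margin. A sketch, where features, weight, and labels are hypothetical stand-ins for the embedding batch, the margin layer's weight matrix, and the targets:

```python
import torch
import torch.nn.functional as F

def accuracy(features, weight, labels):
    # cos(theta) between every feature and every class-weight vector;
    # no scale s and no margin m at evaluation time.
    cos = F.linear(F.normalize(features), F.normalize(weight))
    pred = cos.argmax(dim=1)
    return (pred == labels).float().mean().item()
```

As for s, the CosFace paper discusses choosing it (it proposes a lower bound that grows with the number of classes); s=30 is what this repo's logs use.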

why extract feature output so big float data?

I used my own data to train the model, then used extractDeepFeature to generate the face features. The output looks like this:
tensor([-154825.8594, -159801.4688, 83979.3359, ..., 87386.3516,
-53073.5781, -129851.6797])
Why are the values so big, and not in [-1, 1]?
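The embedding itself is unbounded; only cos θ is in [-1, 1], and that requires L2-normalizing the features first. A small sketch using values from the output above:

```python
import torch
import torch.nn.functional as F

# Raw embeddings can be arbitrarily large; L2-normalize before comparing.
f1 = torch.tensor([-154825.8594, -159801.4688, 83979.3359])
f2 = torch.tensor([87386.3516, -53073.5781, -129851.6797])

f1n = F.normalize(f1, dim=0)
f2n = F.normalize(f2, dim=0)
cos = torch.dot(f1n, f2n)  # cosine similarity, guaranteed to lie in [-1, 1]
```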

TypeError: slice indices must be integers or None or have an __index__ method

When I run lfw_eval.py, this error occurs:

Traceback (most recent call last):
  File "lfw_eval.py", line 122, in <module>
    _, result = eval(net.sphere().to('cuda'), model_path='checkpoint/CosFace_24_checkpoint.pth')
  File "lfw_eval.py", line 109, in eval
    folds = KFold(n=6000, n_folds=10)
  File "lfw_eval.py", line 38, in KFold
    test = base[i * n / n_folds:(i + 1) * n / n_folds]
TypeError: slice indices must be integers or None or have an __index__ method

I found that the error also occurs if I run KFold() on its own.
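A likely cause (assuming the code was written for Python 2): under Python 3, / between ints returns a float, and floats are not valid slice indices. Replacing / with floor division // in the KFold helper fixes it; a sketch following the shapes in the traceback:

```python
def KFold(n=6000, n_folds=10):
    folds = []
    base = list(range(n))
    for i in range(n_folds):
        # '//' keeps the slice indices integral under Python 3
        test = base[i * n // n_folds:(i + 1) * n // n_folds]
        train = list(set(base) - set(test))
        folds.append([train, test])
    return folds
```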

Request for comments by looking at log

Sairam.
I am not able to get the accuracy you reported on LFW. I preprocessed both CASIA and LFW using MTCNN. Could you please have a look at the log below and give me some input?
Thanks.

$python main.py
Namespace(batch_size=512, classifier_type='MCP', cuda=True, database='WebFace', epochs=30, is_gray=False, log_interval=100, lr=0.1, momentum=0.9, network='sphere20', no_cuda=False, num_class=10575, root_path='/data/darshan/DB/', save_path='checkpoint/', step_size=[16000, 24000], train_list='/data/darshan/DB/casiaalllist.txt', weight_decay=0.0005, workers=4)
DataParallel(
(module): sphere(
(layer1): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): PReLU(num_parameters=64)
(2): Block(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu1): PReLU(num_parameters=64)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu2): PReLU(num_parameters=64)
)
)
(layer2): Sequential(
(0): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): PReLU(num_parameters=128)
(2): Block(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu1): PReLU(num_parameters=128)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu2): PReLU(num_parameters=128)
)
(3): Block(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu1): PReLU(num_parameters=128)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu2): PReLU(num_parameters=128)
)
)
(layer3): Sequential(
(0): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): PReLU(num_parameters=256)
(2): Block(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu1): PReLU(num_parameters=256)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu2): PReLU(num_parameters=256)
)
(3): Block(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu1): PReLU(num_parameters=256)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu2): PReLU(num_parameters=256)
)
(4): Block(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu1): PReLU(num_parameters=256)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu2): PReLU(num_parameters=256)
)
(5): Block(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu1): PReLU(num_parameters=256)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu2): PReLU(num_parameters=256)
)
)
(layer4): Sequential(
(0): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): PReLU(num_parameters=512)
(2): Block(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu1): PReLU(num_parameters=512)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(prelu2): PReLU(num_parameters=512)
)
)
(fc): Linear(in_features=21504, out_features=512, bias=True)
)
)

parameters: 22666944

length of train Database: 491542
Number of Identities: 10575

LFWACC=0.6500 std=0.0174 thd=0.6865
2019-09-12 13:15:39 Epoch 1 start training
2019-09-12 13:17:04 Train Epoch: 1 [51200/491542 (10%)]100, Loss: 24.172384, Elapsed time: 84.5849s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:18:25 Train Epoch: 1 [102400/491542 (21%)]200, Loss: 22.846836, Elapsed time: 80.9057s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:19:45 Train Epoch: 1 [153600/491542 (31%)]300, Loss: 22.458508, Elapsed time: 80.8179s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:21:06 Train Epoch: 1 [204800/491542 (42%)]400, Loss: 22.314773, Elapsed time: 80.5659s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:22:26 Train Epoch: 1 [256000/491542 (52%)]500, Loss: 22.082931, Elapsed time: 80.3091s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:23:46 Train Epoch: 1 [307200/491542 (62%)]600, Loss: 22.079480, Elapsed time: 80.1843s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:25:06 Train Epoch: 1 [358400/491542 (73%)]700, Loss: 21.915314, Elapsed time: 79.9633s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:26:26 Train Epoch: 1 [409600/491542 (83%)]800, Loss: 21.772166, Elapsed time: 79.9154s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:27:46 Train Epoch: 1 [460800/491542 (94%)]900, Loss: 21.630416, Elapsed time: 79.8305s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.7080 std=0.0194 thd=0.7615
LFWACC=0.7080 std=0.0194 thd=0.7615
2019-09-12 13:36:00 Epoch 2 start training
2019-09-12 13:37:19 Train Epoch: 2 [51200/491542 (10%)]1060, Loss: 21.314896, Elapsed time: 79.8643s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:38:40 Train Epoch: 2 [102400/491542 (21%)]1160, Loss: 21.099755, Elapsed time: 80.8930s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:40:01 Train Epoch: 2 [153600/491542 (31%)]1260, Loss: 20.972948, Elapsed time: 80.8493s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:41:22 Train Epoch: 2 [204800/491542 (42%)]1360, Loss: 20.820470, Elapsed time: 80.3503s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:42:42 Train Epoch: 2 [256000/491542 (52%)]1460, Loss: 20.727013, Elapsed time: 80.1202s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:44:01 Train Epoch: 2 [307200/491542 (62%)]1560, Loss: 20.647145, Elapsed time: 79.8386s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:45:21 Train Epoch: 2 [358400/491542 (73%)]1660, Loss: 20.586080, Elapsed time: 79.6027s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:46:41 Train Epoch: 2 [409600/491542 (83%)]1760, Loss: 20.498771, Elapsed time: 79.5510s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:48:00 Train Epoch: 2 [460800/491542 (94%)]1860, Loss: 20.324509, Elapsed time: 79.4748s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.7740 std=0.0187 thd=0.5950
LFWACC=0.7740 std=0.0187 thd=0.5950
2019-09-12 13:56:19 Epoch 3 start training
2019-09-12 13:57:38 Train Epoch: 3 [51200/491542 (10%)]2020, Loss: 20.033506, Elapsed time: 79.9399s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 13:58:59 Train Epoch: 3 [102400/491542 (21%)]2120, Loss: 19.954956, Elapsed time: 80.7293s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:00:20 Train Epoch: 3 [153600/491542 (31%)]2220, Loss: 19.852059, Elapsed time: 80.4635s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:01:40 Train Epoch: 3 [204800/491542 (42%)]2320, Loss: 19.753034, Elapsed time: 80.1708s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:03:00 Train Epoch: 3 [256000/491542 (52%)]2420, Loss: 19.594472, Elapsed time: 79.7852s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:04:19 Train Epoch: 3 [307200/491542 (62%)]2520, Loss: 19.508962, Elapsed time: 79.6771s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:05:39 Train Epoch: 3 [358400/491542 (73%)]2620, Loss: 19.383992, Elapsed time: 79.5918s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:06:58 Train Epoch: 3 [409600/491542 (83%)]2720, Loss: 19.195198, Elapsed time: 79.4339s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:08:18 Train Epoch: 3 [460800/491542 (94%)]2820, Loss: 19.079214, Elapsed time: 79.4876s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.8330 std=0.0132 thd=0.4690
LFWACC=0.8330 std=0.0132 thd=0.4690
2019-09-12 14:16:38 Epoch 4 start training
2019-09-12 14:17:58 Train Epoch: 4 [51200/491542 (10%)]2980, Loss: 18.570990, Elapsed time: 80.0371s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:19:19 Train Epoch: 4 [102400/491542 (21%)]3080, Loss: 18.522452, Elapsed time: 80.8673s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:20:39 Train Epoch: 4 [153600/491542 (31%)]3180, Loss: 18.369998, Elapsed time: 80.4820s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:21:59 Train Epoch: 4 [204800/491542 (42%)]3280, Loss: 18.200751, Elapsed time: 80.2700s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:23:19 Train Epoch: 4 [256000/491542 (52%)]3380, Loss: 18.035500, Elapsed time: 79.8510s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:24:39 Train Epoch: 4 [307200/491542 (62%)]3480, Loss: 17.896541, Elapsed time: 79.7751s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:25:59 Train Epoch: 4 [358400/491542 (73%)]3580, Loss: 17.735517, Elapsed time: 79.5429s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:27:18 Train Epoch: 4 [409600/491542 (83%)]3680, Loss: 17.532498, Elapsed time: 79.6597s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:28:38 Train Epoch: 4 [460800/491542 (94%)]3780, Loss: 17.441054, Elapsed time: 79.4519s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.8937 std=0.0169 thd=0.4430
LFWACC=0.8937 std=0.0169 thd=0.4430
2019-09-12 14:36:56 Epoch 5 start training
2019-09-12 14:38:16 Train Epoch: 5 [51200/491542 (10%)]3940, Loss: 16.657547, Elapsed time: 79.9237s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:39:36 Train Epoch: 5 [102400/491542 (21%)]4040, Loss: 16.642810, Elapsed time: 80.8252s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:40:57 Train Epoch: 5 [153600/491542 (31%)]4140, Loss: 16.600412, Elapsed time: 80.6200s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:42:17 Train Epoch: 5 [204800/491542 (42%)]4240, Loss: 16.468722, Elapsed time: 80.3054s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:43:37 Train Epoch: 5 [256000/491542 (52%)]4340, Loss: 16.299980, Elapsed time: 80.0079s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:44:57 Train Epoch: 5 [307200/491542 (62%)]4440, Loss: 16.150490, Elapsed time: 79.7152s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:46:17 Train Epoch: 5 [358400/491542 (73%)]4540, Loss: 16.004787, Elapsed time: 79.9708s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:47:37 Train Epoch: 5 [409600/491542 (83%)]4640, Loss: 15.913374, Elapsed time: 79.8865s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:48:57 Train Epoch: 5 [460800/491542 (94%)]4740, Loss: 15.718520, Elapsed time: 79.8169s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9090 std=0.0119 thd=0.3500
LFWACC=0.9090 std=0.0119 thd=0.3500
2019-09-12 14:57:14 Epoch 6 start training
2019-09-12 14:58:34 Train Epoch: 6 [51200/491542 (10%)]4900, Loss: 15.032623, Elapsed time: 80.0282s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 14:59:55 Train Epoch: 6 [102400/491542 (21%)]5000, Loss: 15.079812, Elapsed time: 80.9940s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:01:15 Train Epoch: 6 [153600/491542 (31%)]5100, Loss: 15.039024, Elapsed time: 80.6035s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:02:36 Train Epoch: 6 [204800/491542 (42%)]5200, Loss: 14.958678, Elapsed time: 80.2624s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:03:56 Train Epoch: 6 [256000/491542 (52%)]5300, Loss: 14.931149, Elapsed time: 80.2024s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:05:16 Train Epoch: 6 [307200/491542 (62%)]5400, Loss: 14.723742, Elapsed time: 80.0598s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:06:36 Train Epoch: 6 [358400/491542 (73%)]5500, Loss: 14.680179, Elapsed time: 80.0743s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:07:56 Train Epoch: 6 [409600/491542 (83%)]5600, Loss: 14.585622, Elapsed time: 79.9089s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:09:16 Train Epoch: 6 [460800/491542 (94%)]5700, Loss: 14.453805, Elapsed time: 79.8399s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9353 std=0.0129 thd=0.3200
LFWACC=0.9353 std=0.0129 thd=0.3200
2019-09-12 15:17:36 Epoch 7 start training
2019-09-12 15:18:56 Train Epoch: 7 [51200/491542 (10%)]5860, Loss: 13.670297, Elapsed time: 80.1493s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:20:17 Train Epoch: 7 [102400/491542 (21%)]5960, Loss: 13.810384, Elapsed time: 81.0842s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:21:38 Train Epoch: 7 [153600/491542 (31%)]6060, Loss: 13.806558, Elapsed time: 80.7074s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:22:58 Train Epoch: 7 [204800/491542 (42%)]6160, Loss: 13.774162, Elapsed time: 80.3810s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:24:19 Train Epoch: 7 [256000/491542 (52%)]6260, Loss: 13.738605, Elapsed time: 80.2657s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:25:39 Train Epoch: 7 [307200/491542 (62%)]6360, Loss: 13.721251, Elapsed time: 80.1428s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:26:59 Train Epoch: 7 [358400/491542 (73%)]6460, Loss: 13.597935, Elapsed time: 80.0042s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:28:19 Train Epoch: 7 [409600/491542 (83%)]6560, Loss: 13.525435, Elapsed time: 80.0443s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:29:39 Train Epoch: 7 [460800/491542 (94%)]6660, Loss: 13.440919, Elapsed time: 80.0930s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9388 std=0.0132 thd=0.2760
LFWACC=0.9388 std=0.0132 thd=0.2760
2019-09-12 15:37:57 Epoch 8 start training
2019-09-12 15:39:17 Train Epoch: 8 [51200/491542 (10%)]6820, Loss: 12.654851, Elapsed time: 80.2228s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:40:38 Train Epoch: 8 [102400/491542 (21%)]6920, Loss: 12.725718, Elapsed time: 81.1743s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:41:59 Train Epoch: 8 [153600/491542 (31%)]7020, Loss: 12.824779, Elapsed time: 80.7800s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:43:20 Train Epoch: 8 [204800/491542 (42%)]7120, Loss: 12.815866, Elapsed time: 80.5148s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:44:40 Train Epoch: 8 [256000/491542 (52%)]7220, Loss: 12.856299, Elapsed time: 80.3110s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:46:00 Train Epoch: 8 [307200/491542 (62%)]7320, Loss: 12.806208, Elapsed time: 80.1582s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:47:20 Train Epoch: 8 [358400/491542 (73%)]7420, Loss: 12.700386, Elapsed time: 80.0838s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:48:40 Train Epoch: 8 [409600/491542 (83%)]7520, Loss: 12.679394, Elapsed time: 80.1831s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 15:50:00 Train Epoch: 8 [460800/491542 (94%)]7620, Loss: 12.646427, Elapsed time: 80.1249s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9443 std=0.0143 thd=0.2760
LFWACC=0.9443 std=0.0143 thd=0.2760
2019-09-12 15:58:22 Epoch 9 start training
2019-09-12 15:59:42 Train Epoch: 9 [51200/491542 (10%)]7780, Loss: 11.899336, Elapsed time: 80.1733s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:01:03 Train Epoch: 9 [102400/491542 (21%)]7880, Loss: 11.995637, Elapsed time: 80.9942s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:02:24 Train Epoch: 9 [153600/491542 (31%)]7980, Loss: 12.043702, Elapsed time: 80.6394s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:03:44 Train Epoch: 9 [204800/491542 (42%)]8080, Loss: 12.063579, Elapsed time: 80.3464s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:05:05 Train Epoch: 9 [256000/491542 (52%)]8180, Loss: 12.114215, Elapsed time: 80.2688s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:06:25 Train Epoch: 9 [307200/491542 (62%)]8280, Loss: 12.102033, Elapsed time: 80.1991s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:07:45 Train Epoch: 9 [358400/491542 (73%)]8380, Loss: 11.996697, Elapsed time: 80.0991s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:09:05 Train Epoch: 9 [409600/491542 (83%)]8480, Loss: 12.030120, Elapsed time: 79.8871s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:10:25 Train Epoch: 9 [460800/491542 (94%)]8580, Loss: 11.939158, Elapsed time: 79.9166s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9485 std=0.0120 thd=0.2550
LFWACC=0.9485 std=0.0120 thd=0.2550
2019-09-12 16:18:49 Epoch 10 start training
2019-09-12 16:20:49 Train Epoch: 10 [51200/491542 (10%)]8740, Loss: 11.147560, Elapsed time: 120.0822s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:22:11 Train Epoch: 10 [102400/491542 (21%)]8840, Loss: 11.314131, Elapsed time: 81.8089s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:23:32 Train Epoch: 10 [153600/491542 (31%)]8940, Loss: 11.367059, Elapsed time: 81.0666s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:24:52 Train Epoch: 10 [204800/491542 (42%)]9040, Loss: 11.366508, Elapsed time: 80.4465s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:26:13 Train Epoch: 10 [256000/491542 (52%)]9140, Loss: 11.494056, Elapsed time: 80.3031s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:27:33 Train Epoch: 10 [307200/491542 (62%)]9240, Loss: 11.443226, Elapsed time: 80.0928s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:28:53 Train Epoch: 10 [358400/491542 (73%)]9340, Loss: 11.480610, Elapsed time: 80.1754s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:30:13 Train Epoch: 10 [409600/491542 (83%)]9440, Loss: 11.371598, Elapsed time: 79.9209s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:31:33 Train Epoch: 10 [460800/491542 (94%)]9540, Loss: 11.354752, Elapsed time: 79.8002s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9528 std=0.0117 thd=0.2300
LFWACC=0.9528 std=0.0117 thd=0.2300
2019-09-12 16:39:57 Epoch 11 start training
2019-09-12 16:41:18 Train Epoch: 11 [51200/491542 (10%)]9700, Loss: 10.587092, Elapsed time: 80.9694s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:42:39 Train Epoch: 11 [102400/491542 (21%)]9800, Loss: 10.764874, Elapsed time: 81.2108s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:44:00 Train Epoch: 11 [153600/491542 (31%)]9900, Loss: 10.839315, Elapsed time: 80.9648s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:45:21 Train Epoch: 11 [204800/491542 (42%)]10000, Loss: 10.854016, Elapsed time: 80.5961s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:46:42 Train Epoch: 11 [256000/491542 (52%)]10100, Loss: 10.931242, Elapsed time: 80.7728s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:48:02 Train Epoch: 11 [307200/491542 (62%)]10200, Loss: 10.880720, Elapsed time: 80.2172s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:49:22 Train Epoch: 11 [358400/491542 (73%)]10300, Loss: 10.935182, Elapsed time: 80.1392s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:50:43 Train Epoch: 11 [409600/491542 (83%)]10400, Loss: 10.917282, Elapsed time: 81.0105s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 16:52:04 Train Epoch: 11 [460800/491542 (94%)]10500, Loss: 10.867472, Elapsed time: 81.5648s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9565 std=0.0118 thd=0.2450
LFWACC=0.9565 std=0.0118 thd=0.2450
2019-09-12 17:00:27 Epoch 12 start training
2019-09-12 17:01:47 Train Epoch: 12 [51200/491542 (10%)]10660, Loss: 10.109481, Elapsed time: 80.4942s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:03:09 Train Epoch: 12 [102400/491542 (21%)]10760, Loss: 10.340177, Elapsed time: 81.4375s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:04:30 Train Epoch: 12 [153600/491542 (31%)]10860, Loss: 10.307191, Elapsed time: 81.3402s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:05:51 Train Epoch: 12 [204800/491542 (42%)]10960, Loss: 10.423696, Elapsed time: 81.3251s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:07:13 Train Epoch: 12 [256000/491542 (52%)]11060, Loss: 10.550815, Elapsed time: 81.0702s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:08:33 Train Epoch: 12 [307200/491542 (62%)]11160, Loss: 10.521151, Elapsed time: 80.7824s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:09:54 Train Epoch: 12 [358400/491542 (73%)]11260, Loss: 10.463209, Elapsed time: 80.8619s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:11:15 Train Epoch: 12 [409600/491542 (83%)]11360, Loss: 10.521783, Elapsed time: 80.5978s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:12:35 Train Epoch: 12 [460800/491542 (94%)]11460, Loss: 10.477023, Elapsed time: 80.3846s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9587 std=0.0109 thd=0.2000
LFWACC=0.9587 std=0.0109 thd=0.2000
2019-09-12 17:21:28 Epoch 13 start training
2019-09-12 17:22:59 Train Epoch: 13 [51200/491542 (10%)]11620, Loss: 9.660328, Elapsed time: 90.6181s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:24:30 Train Epoch: 13 [102400/491542 (21%)]11720, Loss: 9.911855, Elapsed time: 91.1234s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:26:00 Train Epoch: 13 [153600/491542 (31%)]11820, Loss: 10.020755, Elapsed time: 90.0579s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:27:29 Train Epoch: 13 [204800/491542 (42%)]11920, Loss: 10.099673, Elapsed time: 89.4470s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:28:59 Train Epoch: 13 [256000/491542 (52%)]12020, Loss: 10.136460, Elapsed time: 89.3758s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:30:28 Train Epoch: 13 [307200/491542 (62%)]12120, Loss: 10.116849, Elapsed time: 89.0912s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:31:57 Train Epoch: 13 [358400/491542 (73%)]12220, Loss: 10.142291, Elapsed time: 88.9141s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:33:26 Train Epoch: 13 [409600/491542 (83%)]12320, Loss: 10.122780, Elapsed time: 89.0122s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:34:54 Train Epoch: 13 [460800/491542 (94%)]12420, Loss: 10.157972, Elapsed time: 88.6966s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9560 std=0.0093 thd=0.2150
LFWACC=0.9560 std=0.0093 thd=0.2150
2019-09-12 17:44:44 Epoch 14 start training
2019-09-12 17:46:04 Train Epoch: 14 [51200/491542 (10%)]12580, Loss: 9.303706, Elapsed time: 80.4042s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:47:27 Train Epoch: 14 [102400/491542 (21%)]12680, Loss: 9.530977, Elapsed time: 82.0868s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:48:47 Train Epoch: 14 [153600/491542 (31%)]12780, Loss: 9.697279, Elapsed time: 80.4778s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:50:07 Train Epoch: 14 [204800/491542 (42%)]12880, Loss: 9.759717, Elapsed time: 80.4406s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:52:10 Train Epoch: 14 [256000/491542 (52%)]12980, Loss: 9.825520, Elapsed time: 122.4972s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:54:30 Train Epoch: 14 [307200/491542 (62%)]13080, Loss: 9.868352, Elapsed time: 140.0892s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:56:32 Train Epoch: 14 [358400/491542 (73%)]13180, Loss: 9.864648, Elapsed time: 121.8854s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 17:57:52 Train Epoch: 14 [409600/491542 (83%)]13280, Loss: 9.814745, Elapsed time: 79.9319s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:00:06 Train Epoch: 14 [460800/491542 (94%)]13380, Loss: 9.853338, Elapsed time: 134.3097s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9615 std=0.0103 thd=0.1820
LFWACC=0.9615 std=0.0103 thd=0.1820
2019-09-12 18:11:40 Epoch 15 start training
2019-09-12 18:14:00 Train Epoch: 15 [51200/491542 (10%)]13540, Loss: 8.961409, Elapsed time: 139.3340s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:16:18 Train Epoch: 15 [102400/491542 (21%)]13640, Loss: 9.218532, Elapsed time: 137.9664s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:18:37 Train Epoch: 15 [153600/491542 (31%)]13740, Loss: 9.383707, Elapsed time: 139.0893s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:20:56 Train Epoch: 15 [204800/491542 (42%)]13840, Loss: 9.420286, Elapsed time: 139.2454s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:23:15 Train Epoch: 15 [256000/491542 (52%)]13940, Loss: 9.490800, Elapsed time: 138.8863s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:25:34 Train Epoch: 15 [307200/491542 (62%)]14040, Loss: 9.481273, Elapsed time: 139.3663s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:27:54 Train Epoch: 15 [358400/491542 (73%)]14140, Loss: 9.592036, Elapsed time: 139.2563s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:30:13 Train Epoch: 15 [409600/491542 (83%)]14240, Loss: 9.610366, Elapsed time: 139.1816s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:32:32 Train Epoch: 15 [460800/491542 (94%)]14340, Loss: 9.677226, Elapsed time: 139.1846s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9625 std=0.0089 thd=0.1995
LFWACC=0.9625 std=0.0089 thd=0.1995
2019-09-12 18:44:04 Epoch 16 start training
2019-09-12 18:46:23 Train Epoch: 16 [51200/491542 (10%)]14500, Loss: 8.704179, Elapsed time: 138.4071s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:48:41 Train Epoch: 16 [102400/491542 (21%)]14600, Loss: 8.906614, Elapsed time: 138.2743s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:50:59 Train Epoch: 16 [153600/491542 (31%)]14700, Loss: 9.111327, Elapsed time: 138.0615s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:53:17 Train Epoch: 16 [204800/491542 (42%)]14800, Loss: 9.253497, Elapsed time: 138.1947s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:55:36 Train Epoch: 16 [256000/491542 (52%)]14900, Loss: 9.267926, Elapsed time: 138.5134s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 18:57:54 Train Epoch: 16 [307200/491542 (62%)]15000, Loss: 9.278756, Elapsed time: 138.0476s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:00:12 Train Epoch: 16 [358400/491542 (73%)]15100, Loss: 9.360255, Elapsed time: 138.1782s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:02:31 Train Epoch: 16 [409600/491542 (83%)]15200, Loss: 9.376236, Elapsed time: 138.8230s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:04:50 Train Epoch: 16 [460800/491542 (94%)]15300, Loss: 9.364449, Elapsed time: 138.7523s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9648 std=0.0104 thd=0.1755
2019-09-12 19:16:24 Epoch 17 start training
2019-09-12 19:18:43 Train Epoch: 17 [51200/491542 (10%)]15460, Loss: 8.457375, Elapsed time: 139.0128s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:21:02 Train Epoch: 17 [102400/491542 (21%)]15560, Loss: 8.654380, Elapsed time: 138.7945s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:23:20 Train Epoch: 17 [153600/491542 (31%)]15660, Loss: 8.801086, Elapsed time: 138.4608s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:25:39 Train Epoch: 17 [204800/491542 (42%)]15760, Loss: 8.969566, Elapsed time: 138.4061s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:27:57 Train Epoch: 17 [256000/491542 (52%)]15860, Loss: 9.039080, Elapsed time: 138.9734s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:30:16 Train Epoch: 17 [307200/491542 (62%)]15960, Loss: 9.156300, Elapsed time: 138.8538s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:31:10 Adjust learning rate to 0.010000000000000002
2019-09-12 19:32:35 Train Epoch: 17 [358400/491542 (73%)]16060, Loss: 8.584494, Elapsed time: 138.8004s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:34:56 Train Epoch: 17 [409600/491542 (83%)]16160, Loss: 7.676757, Elapsed time: 141.2116s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:37:19 Train Epoch: 17 [460800/491542 (94%)]16260, Loss: 7.547308, Elapsed time: 142.1472s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9728 std=0.0076 thd=0.1745
2019-09-12 19:49:00 Epoch 18 start training
2019-09-12 19:51:24 Train Epoch: 18 [51200/491542 (10%)]16420, Loss: 6.340328, Elapsed time: 143.2423s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:53:46 Train Epoch: 18 [102400/491542 (21%)]16520, Loss: 6.305048, Elapsed time: 142.3993s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:56:09 Train Epoch: 18 [153600/491542 (31%)]16620, Loss: 6.281746, Elapsed time: 143.0036s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 19:58:32 Train Epoch: 18 [204800/491542 (42%)]16720, Loss: 6.233912, Elapsed time: 143.0438s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:00:55 Train Epoch: 18 [256000/491542 (52%)]16820, Loss: 6.212523, Elapsed time: 143.0783s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:03:18 Train Epoch: 18 [307200/491542 (62%)]16920, Loss: 6.205864, Elapsed time: 142.9269s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:05:41 Train Epoch: 18 [358400/491542 (73%)]17020, Loss: 6.180695, Elapsed time: 142.6599s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:08:04 Train Epoch: 18 [409600/491542 (83%)]17120, Loss: 6.115624, Elapsed time: 143.1000s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:10:26 Train Epoch: 18 [460800/491542 (94%)]17220, Loss: 6.090673, Elapsed time: 142.6650s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9732 std=0.0079 thd=0.1575
2019-09-12 20:21:01 Epoch 19 start training
2019-09-12 20:22:32 Train Epoch: 19 [51200/491542 (10%)]17380, Loss: 5.735888, Elapsed time: 91.2596s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:24:04 Train Epoch: 19 [102400/491542 (21%)]17480, Loss: 5.748965, Elapsed time: 91.6366s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:25:30 Train Epoch: 19 [153600/491542 (31%)]17580, Loss: 5.732581, Elapsed time: 86.2912s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:27:21 Train Epoch: 19 [204800/491542 (42%)]17680, Loss: 5.751289, Elapsed time: 111.0619s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:29:44 Train Epoch: 19 [256000/491542 (52%)]17780, Loss: 5.728878, Elapsed time: 142.7668s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:32:06 Train Epoch: 19 [307200/491542 (62%)]17880, Loss: 5.736048, Elapsed time: 142.4956s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:34:29 Train Epoch: 19 [358400/491542 (73%)]17980, Loss: 5.762702, Elapsed time: 142.4277s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:36:51 Train Epoch: 19 [409600/491542 (83%)]18080, Loss: 5.718217, Elapsed time: 142.6604s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:39:14 Train Epoch: 19 [460800/491542 (94%)]18180, Loss: 5.755525, Elapsed time: 142.6548s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9733 std=0.0083 thd=0.1710
2019-09-12 20:50:56 Epoch 20 start training
2019-09-12 20:53:05 Train Epoch: 20 [51200/491542 (10%)]18340, Loss: 5.332196, Elapsed time: 129.1307s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:54:27 Train Epoch: 20 [102400/491542 (21%)]18440, Loss: 5.423336, Elapsed time: 81.4300s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:55:48 Train Epoch: 20 [153600/491542 (31%)]18540, Loss: 5.421982, Elapsed time: 81.4570s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:57:10 Train Epoch: 20 [204800/491542 (42%)]18640, Loss: 5.471625, Elapsed time: 81.5223s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:58:31 Train Epoch: 20 [256000/491542 (52%)]18740, Loss: 5.446765, Elapsed time: 81.4726s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 20:59:53 Train Epoch: 20 [307200/491542 (62%)]18840, Loss: 5.490581, Elapsed time: 81.4594s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 21:01:14 Train Epoch: 20 [358400/491542 (73%)]18940, Loss: 5.508192, Elapsed time: 81.5665s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 21:02:38 Train Epoch: 20 [409600/491542 (83%)]19040, Loss: 5.529732, Elapsed time: 83.1760s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 21:03:59 Train Epoch: 20 [460800/491542 (94%)]19140, Loss: 5.509339, Elapsed time: 81.3921s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9723 std=0.0081 thd=0.1550
2019-09-12 21:15:50 Epoch 21 start training
2019-09-12 21:18:24 Train Epoch: 21 [51200/491542 (10%)]19300, Loss: 5.137030, Elapsed time: 153.8591s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 21:20:57 Train Epoch: 21 [102400/491542 (21%)]19400, Loss: 5.186476, Elapsed time: 153.3372s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 21:23:30 Train Epoch: 21 [153600/491542 (31%)]19500, Loss: 5.189737, Elapsed time: 153.1752s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 21:26:04 Train Epoch: 21 [204800/491542 (42%)]19600, Loss: 5.263559, Elapsed time: 153.7611s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 21:28:37 Train Epoch: 21 [256000/491542 (52%)]19700, Loss: 5.278027, Elapsed time: 153.1694s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 21:31:11 Train Epoch: 21 [307200/491542 (62%)]19800, Loss: 5.265022, Elapsed time: 154.1127s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 21:33:45 Train Epoch: 21 [358400/491542 (73%)]19900, Loss: 5.341223, Elapsed time: 153.9098s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 21:36:19 Train Epoch: 21 [409600/491542 (83%)]20000, Loss: 5.330581, Elapsed time: 153.7918s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 21:38:53 Train Epoch: 21 [460800/491542 (94%)]20100, Loss: 5.322570, Elapsed time: 153.4423s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9737 std=0.0084 thd=0.1685
2019-09-12 21:51:15 Epoch 22 start training
2019-09-12 21:53:49 Train Epoch: 22 [51200/491542 (10%)]20260, Loss: 4.903748, Elapsed time: 154.0915s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 21:56:23 Train Epoch: 22 [102400/491542 (21%)]20360, Loss: 4.999941, Elapsed time: 153.8018s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 21:58:56 Train Epoch: 22 [153600/491542 (31%)]20460, Loss: 5.057455, Elapsed time: 153.5426s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:01:30 Train Epoch: 22 [204800/491542 (42%)]20560, Loss: 5.065055, Elapsed time: 153.7150s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:04:02 Train Epoch: 22 [256000/491542 (52%)]20660, Loss: 5.066722, Elapsed time: 152.5070s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:06:34 Train Epoch: 22 [307200/491542 (62%)]20760, Loss: 5.127178, Elapsed time: 151.7778s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:09:05 Train Epoch: 22 [358400/491542 (73%)]20860, Loss: 5.162346, Elapsed time: 151.2084s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:11:37 Train Epoch: 22 [409600/491542 (83%)]20960, Loss: 5.238491, Elapsed time: 151.7813s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:14:09 Train Epoch: 22 [460800/491542 (94%)]21060, Loss: 5.238093, Elapsed time: 152.1675s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9733 std=0.0063 thd=0.1590
2019-09-12 22:28:00 Epoch 23 start training
2019-09-12 22:30:34 Train Epoch: 23 [51200/491542 (10%)]21220, Loss: 4.754485, Elapsed time: 153.9534s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:33:06 Train Epoch: 23 [102400/491542 (21%)]21320, Loss: 4.815938, Elapsed time: 152.1592s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:35:38 Train Epoch: 23 [153600/491542 (31%)]21420, Loss: 4.894540, Elapsed time: 151.8319s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:38:10 Train Epoch: 23 [204800/491542 (42%)]21520, Loss: 4.925280, Elapsed time: 151.6614s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:40:41 Train Epoch: 23 [256000/491542 (52%)]21620, Loss: 4.971471, Elapsed time: 151.7486s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:43:13 Train Epoch: 23 [307200/491542 (62%)]21720, Loss: 5.048486, Elapsed time: 151.6972s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:45:45 Train Epoch: 23 [358400/491542 (73%)]21820, Loss: 5.066857, Elapsed time: 151.4015s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:48:16 Train Epoch: 23 [409600/491542 (83%)]21920, Loss: 5.072680, Elapsed time: 151.4893s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 22:50:49 Train Epoch: 23 [460800/491542 (94%)]22020, Loss: 5.095682, Elapsed time: 152.5271s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9718 std=0.0088 thd=0.1690
2019-09-12 23:03:09 Epoch 24 start training
2019-09-12 23:05:44 Train Epoch: 24 [51200/491542 (10%)]22180, Loss: 4.647548, Elapsed time: 154.4657s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:08:18 Train Epoch: 24 [102400/491542 (21%)]22280, Loss: 4.668909, Elapsed time: 153.8059s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:10:10 Train Epoch: 24 [153600/491542 (31%)]22380, Loss: 4.726871, Elapsed time: 112.4473s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:11:42 Train Epoch: 24 [204800/491542 (42%)]22480, Loss: 4.840051, Elapsed time: 91.7547s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:13:14 Train Epoch: 24 [256000/491542 (52%)]22580, Loss: 4.846678, Elapsed time: 91.7453s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:14:45 Train Epoch: 24 [307200/491542 (62%)]22680, Loss: 4.872208, Elapsed time: 91.4130s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:16:17 Train Epoch: 24 [358400/491542 (73%)]22780, Loss: 4.950935, Elapsed time: 91.8960s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:17:48 Train Epoch: 24 [409600/491542 (83%)]22880, Loss: 5.003627, Elapsed time: 91.2022s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:19:20 Train Epoch: 24 [460800/491542 (94%)]22980, Loss: 5.006716, Elapsed time: 91.9650s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9708 std=0.0074 thd=0.1595
2019-09-12 23:28:44 Epoch 25 start training
2019-09-12 23:31:19 Train Epoch: 25 [51200/491542 (10%)]23140, Loss: 4.488079, Elapsed time: 154.7376s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:33:52 Train Epoch: 25 [102400/491542 (21%)]23240, Loss: 4.588842, Elapsed time: 153.9628s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:36:27 Train Epoch: 25 [153600/491542 (31%)]23340, Loss: 4.649957, Elapsed time: 154.3383s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:39:01 Train Epoch: 25 [204800/491542 (42%)]23440, Loss: 4.708882, Elapsed time: 154.3972s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:41:35 Train Epoch: 25 [256000/491542 (52%)]23540, Loss: 4.743938, Elapsed time: 154.0753s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:44:10 Train Epoch: 25 [307200/491542 (62%)]23640, Loss: 4.839835, Elapsed time: 154.2195s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:46:44 Train Epoch: 25 [358400/491542 (73%)]23740, Loss: 4.824670, Elapsed time: 154.4489s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:49:18 Train Epoch: 25 [409600/491542 (83%)]23840, Loss: 4.870885, Elapsed time: 154.1075s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:51:52 Train Epoch: 25 [460800/491542 (94%)]23940, Loss: 4.937899, Elapsed time: 154.1818s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-12 23:53:23 Adjust learning rate to 0.0010000000000000002
LFWACC=0.9750 std=0.0070 thd=0.1550
2019-09-13 00:04:14 Epoch 26 start training
2019-09-13 00:06:48 Train Epoch: 26 [51200/491542 (10%)]24100, Loss: 4.170926, Elapsed time: 154.5220s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:09:22 Train Epoch: 26 [102400/491542 (21%)]24200, Loss: 4.112605, Elapsed time: 154.1150s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:11:57 Train Epoch: 26 [153600/491542 (31%)]24300, Loss: 4.101389, Elapsed time: 154.3511s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:14:31 Train Epoch: 26 [204800/491542 (42%)]24400, Loss: 4.087288, Elapsed time: 154.4437s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:17:05 Train Epoch: 26 [256000/491542 (52%)]24500, Loss: 4.066775, Elapsed time: 154.2402s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:19:39 Train Epoch: 26 [307200/491542 (62%)]24600, Loss: 4.089148, Elapsed time: 154.0633s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:22:14 Train Epoch: 26 [358400/491542 (73%)]24700, Loss: 4.059531, Elapsed time: 154.3003s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:24:48 Train Epoch: 26 [409600/491542 (83%)]24800, Loss: 4.069454, Elapsed time: 154.3140s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:27:22 Train Epoch: 26 [460800/491542 (94%)]24900, Loss: 4.026198, Elapsed time: 154.2076s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9750 std=0.0072 thd=0.1500
2019-09-13 00:39:43 Epoch 27 start training
2019-09-13 00:42:17 Train Epoch: 27 [51200/491542 (10%)]25060, Loss: 3.961772, Elapsed time: 154.5979s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:44:52 Train Epoch: 27 [102400/491542 (21%)]25160, Loss: 3.983278, Elapsed time: 154.3640s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:47:26 Train Epoch: 27 [153600/491542 (31%)]25260, Loss: 3.975024, Elapsed time: 154.1788s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:50:00 Train Epoch: 27 [204800/491542 (42%)]25360, Loss: 4.009512, Elapsed time: 154.3001s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:52:35 Train Epoch: 27 [256000/491542 (52%)]25460, Loss: 3.983597, Elapsed time: 154.4772s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:55:09 Train Epoch: 27 [307200/491542 (62%)]25560, Loss: 3.968303, Elapsed time: 153.9256s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 00:57:43 Train Epoch: 27 [358400/491542 (73%)]25660, Loss: 4.008958, Elapsed time: 154.0790s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:00:17 Train Epoch: 27 [409600/491542 (83%)]25760, Loss: 4.004559, Elapsed time: 154.0410s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:02:51 Train Epoch: 27 [460800/491542 (94%)]25860, Loss: 4.039113, Elapsed time: 154.3634s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9727 std=0.0068 thd=0.1475
2019-09-13 01:15:13 Epoch 28 start training
2019-09-13 01:17:48 Train Epoch: 28 [51200/491542 (10%)]26020, Loss: 3.872343, Elapsed time: 154.7083s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:20:22 Train Epoch: 28 [102400/491542 (21%)]26120, Loss: 3.911179, Elapsed time: 154.2545s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:22:57 Train Epoch: 28 [153600/491542 (31%)]26220, Loss: 3.957769, Elapsed time: 154.4300s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:25:31 Train Epoch: 28 [204800/491542 (42%)]26320, Loss: 3.939793, Elapsed time: 154.0173s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:28:05 Train Epoch: 28 [256000/491542 (52%)]26420, Loss: 3.969566, Elapsed time: 154.2614s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:30:40 Train Epoch: 28 [307200/491542 (62%)]26520, Loss: 3.945882, Elapsed time: 154.6984s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:33:14 Train Epoch: 28 [358400/491542 (73%)]26620, Loss: 3.970112, Elapsed time: 154.1910s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:35:48 Train Epoch: 28 [409600/491542 (83%)]26720, Loss: 3.960654, Elapsed time: 154.3420s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:38:22 Train Epoch: 28 [460800/491542 (94%)]26820, Loss: 3.996202, Elapsed time: 154.2865s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9725 std=0.0076 thd=0.1590
2019-09-13 01:48:37 Epoch 29 start training
2019-09-13 01:50:04 Train Epoch: 29 [51200/491542 (10%)]26980, Loss: 3.902227, Elapsed time: 87.2243s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:51:51 Train Epoch: 29 [102400/491542 (21%)]27080, Loss: 3.905785, Elapsed time: 106.5435s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:54:24 Train Epoch: 29 [153600/491542 (31%)]27180, Loss: 3.861100, Elapsed time: 153.7836s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:56:59 Train Epoch: 29 [204800/491542 (42%)]27280, Loss: 3.898150, Elapsed time: 154.3326s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 01:59:33 Train Epoch: 29 [256000/491542 (52%)]27380, Loss: 3.930506, Elapsed time: 153.9139s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 02:02:06 Train Epoch: 29 [307200/491542 (62%)]27480, Loss: 3.902833, Elapsed time: 153.6785s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 02:04:40 Train Epoch: 29 [358400/491542 (73%)]27580, Loss: 3.919024, Elapsed time: 153.8985s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 02:07:14 Train Epoch: 29 [409600/491542 (83%)]27680, Loss: 3.937219, Elapsed time: 154.0989s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 02:09:48 Train Epoch: 29 [460800/491542 (94%)]27780, Loss: 3.964422, Elapsed time: 153.8046s(100 iters) Margin: 0.4000, Scale: 30.00
LFWACC=0.9725 std=0.0072 thd=0.1540
2019-09-13 02:22:13 Epoch 30 start training
2019-09-13 02:24:48 Train Epoch: 30 [51200/491542 (10%)]27940, Loss: 3.839538, Elapsed time: 154.7254s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 02:27:23 Train Epoch: 30 [102400/491542 (21%)]28040, Loss: 3.907460, Elapsed time: 154.5071s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 02:29:57 Train Epoch: 30 [153600/491542 (31%)]28140, Loss: 3.853016, Elapsed time: 154.3258s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 02:32:31 Train Epoch: 30 [204800/491542 (42%)]28240, Loss: 3.874325, Elapsed time: 154.2081s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 02:35:05 Train Epoch: 30 [256000/491542 (52%)]28340, Loss: 3.877134, Elapsed time: 154.0853s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 02:37:40 Train Epoch: 30 [307200/491542 (62%)]28440, Loss: 3.901800, Elapsed time: 154.4236s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 02:40:14 Train Epoch: 30 [358400/491542 (73%)]28540, Loss: 3.864615, Elapsed time: 154.2500s(100 iters) Margin: 0.4000, Scale: 30.00
2019-09-13 02:42:48 Train Epoch: 30 [409600/491542 (83%)]28640, Loss: 3.875914, Elapsed time: 154.2236s(100 iters) Margin: 0.4000, Scale: 30.00
Finished Training
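The "Adjust learning rate to 0.010000000000000002" lines in the log correspond to a step decay of the initial rate by a factor of 0.1; the trailing ...002 is just floating-point rounding of 0.1 * 0.1. A minimal pure-Python sketch of that schedule (the base rate of 0.1 and the milestones at epochs 17 and 25 are inferred from the log, not taken from the repo; in PyTorch this is what `torch.optim.lr_scheduler.MultiStepLR` does):

```python
def stepped_lr(base_lr=0.1, gamma=0.1, milestones=(17, 25), epochs=30):
    """Per-epoch learning rates under a MultiStepLR-style schedule:
    multiply the current rate by gamma when a milestone epoch is reached."""
    lr, lrs = base_lr, []
    for epoch in range(1, epochs + 1):
        if epoch in milestones:
            lr *= gamma
        lrs.append(lr)
    return lrs

rates = stepped_lr()
print(rates[16])  # epoch 17: 0.010000000000000002, matching the log line
```

The same float artifact appears again at the second decay step, which is why the log reads 0.0010000000000000002 rather than a clean 0.001.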

RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x131072 and 21504x512)

After running python lfw_eval.py, this error occurs:

Traceback (most recent call last):
  File "lfw_eval.py", line 122, in <module>
    _, result = eval(net.sphere().to('cuda'), model_path='checkpoint/CosFace_24_checkpoint.pth')
  File "lfw_eval.py", line 101, in eval
    f1 = extractDeepFeature(img1, model, is_gray)
  File "lfw_eval.py", line 30, in extractDeepFeature
    ft = torch.cat((model(img), model(img_)), 1)[0].to('cpu')
  File "/miniconda3/cosface/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/code/CosFace_pytorch/net.py", line 66, in forward
    x = self.fc(x)
  File "/miniconda3/cosface/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/miniconda3/cosface/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 103, in forward
    return F.linear(input, self.weight, self.bias)
  File "/miniconda3/cosface/lib/python3.8/site-packages/torch/nn/functional.py", line 1848, in linear
    return torch._C._nn.linear(input, weight, bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x131072 and 21504x512)
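The mismatch is arithmetic, not random: sphere20's final fully connected layer expects a flattened 512x7x6 feature map (512 * 7 * 6 = 21504), which the conv trunk produces from a 112x96 face crop. A flattened size of 131072 = 512 * 16 * 16 means the LFW image reached the network at a larger resolution, so it was never resized to 112x96 before extractDeepFeature. A small sanity-check sketch — the total stride of 16 and the 256x256 example input are inferred assumptions, not values read from the repo:

```python
def flatten_size(height, width, channels=512, stride=16):
    """Size of the flattened conv output fed to sphere20's fc layer,
    assuming the trunk downsamples the input by a total stride of 16."""
    return channels * (height // stride) * (width // stride)

print(flatten_size(112, 96))   # 21504  -> what the fc layer was trained for
print(flatten_size(256, 256))  # 131072 -> what triggers the reported error
```

The usual fix is to resize before tensor conversion, e.g. adding transforms.Resize((112, 96)) (height first, then width) to the torchvision preprocessing pipeline in lfw_eval.py.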
