The demo for the human body works; only hand inference fails. The run ends with a state_dict size-mismatch error (full log below), which suggests the released hand checkpoint was trained with a different transformer configuration than the one the current script constructs; see the diagnostic sketches after the log.
2021-05-28 14:43:06,634 METRO Inference INFO: Using 1 GPUs
/home/wd-nebojsa-b/miniconda3/envs/metro/lib/python3.7/site-packages/manopth/manolayer.py:72: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
torch.Tensor(smpl_data['betas'].r).unsqueeze(0))
/home/wd-nebojsa-b/miniconda3/envs/metro/lib/python3.7/site-packages/scipy/sparse/_index.py:84: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
self._set_intXint(row, col, x.flat[0])
[the two lines above are repeated five times in total]
2021-05-28 14:43:08,726 METRO Inference INFO: Inference: Loading from checkpoint ./models/metro_release/metro_hand_state_dict.bin
2021-05-28 14:43:08,726 METRO Inference INFO: Update config parameter num_hidden_layers: 12 -> 4
2021-05-28 14:43:08,726 METRO Inference INFO: Update config parameter hidden_size: 768 -> 1024
2021-05-28 14:43:08,726 METRO Inference INFO: Update config parameter num_attention_heads: 12 -> 4
2021-05-28 14:43:08,726 METRO Inference INFO: Update config parameter intermediate_size: 3072 -> 4096
2021-05-28 14:43:10,284 METRO Inference INFO: Init model from scratch.
2021-05-28 14:43:10,285 METRO Inference INFO: Update config parameter num_hidden_layers: 12 -> 4
2021-05-28 14:43:10,285 METRO Inference INFO: Update config parameter hidden_size: 768 -> 256
2021-05-28 14:43:10,285 METRO Inference INFO: Update config parameter num_attention_heads: 12 -> 4
2021-05-28 14:43:10,285 METRO Inference INFO: Update config parameter intermediate_size: 3072 -> 1024
2021-05-28 14:43:10,508 METRO Inference INFO: Init model from scratch.
2021-05-28 14:43:10,508 METRO Inference INFO: Update config parameter num_hidden_layers: 12 -> 4
2021-05-28 14:43:10,508 METRO Inference INFO: Update config parameter hidden_size: 768 -> 64
2021-05-28 14:43:10,508 METRO Inference INFO: Update config parameter num_attention_heads: 12 -> 4
2021-05-28 14:43:10,508 METRO Inference INFO: Update config parameter intermediate_size: 3072 -> 256
2021-05-28 14:43:10,556 METRO Inference INFO: Init model from scratch.
=> loading pretrained model models/hrnet/hrnetv2_w64_imagenet_pretrained.pth
2021-05-28 14:43:13,671 METRO Inference INFO: => loading hrnet-v2-w64 model
2021-05-28 14:43:13,673 METRO Inference INFO: Transformers total parameters: 101182022
2021-05-28 14:43:13,679 METRO Inference INFO: Backbone total parameters: 128059944
2021-05-28 14:43:13,680 METRO Inference INFO: Loading state dict from checkpoint ./models/metro_release/metro_hand_state_dict.bin
Traceback (most recent call last):
  File "./metro/tools/end2end_inference_handmesh.py", line 312, in <module>
    main(args)
  File "./metro/tools/end2end_inference_handmesh.py", line 279, in main
    _metro_network.load_state_dict(state_dict, strict=False)
  File "/home/wd-nebojsa-b/miniconda3/envs/metro/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1052, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for METRO_Hand_Network:
size mismatch for trans_encoder.0.bert.encoder.layer.0.intermediate.dense.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for trans_encoder.0.bert.encoder.layer.0.intermediate.dense.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for trans_encoder.0.bert.encoder.layer.0.output.dense.weight: copying a param with shape torch.Size([1024, 3072]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for trans_encoder.0.bert.encoder.layer.1.intermediate.dense.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for trans_encoder.0.bert.encoder.layer.1.intermediate.dense.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for trans_encoder.0.bert.encoder.layer.1.output.dense.weight: copying a param with shape torch.Size([1024, 3072]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for trans_encoder.0.bert.encoder.layer.2.intermediate.dense.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for trans_encoder.0.bert.encoder.layer.2.intermediate.dense.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for trans_encoder.0.bert.encoder.layer.2.output.dense.weight: copying a param with shape torch.Size([1024, 3072]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for trans_encoder.0.bert.encoder.layer.3.intermediate.dense.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for trans_encoder.0.bert.encoder.layer.3.intermediate.dense.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for trans_encoder.0.bert.encoder.layer.3.output.dense.weight: copying a param with shape torch.Size([1024, 3072]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for trans_encoder.1.bert.encoder.layer.0.intermediate.dense.weight: copying a param with shape torch.Size([3072, 256]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for trans_encoder.1.bert.encoder.layer.0.intermediate.dense.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for trans_encoder.1.bert.encoder.layer.0.output.dense.weight: copying a param with shape torch.Size([256, 3072]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for trans_encoder.1.bert.encoder.layer.1.intermediate.dense.weight: copying a param with shape torch.Size([3072, 256]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for trans_encoder.1.bert.encoder.layer.1.intermediate.dense.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for trans_encoder.1.bert.encoder.layer.1.output.dense.weight: copying a param with shape torch.Size([256, 3072]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for trans_encoder.1.bert.encoder.layer.2.intermediate.dense.weight: copying a param with shape torch.Size([3072, 256]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for trans_encoder.1.bert.encoder.layer.2.intermediate.dense.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for trans_encoder.1.bert.encoder.layer.2.output.dense.weight: copying a param with shape torch.Size([256, 3072]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for trans_encoder.1.bert.encoder.layer.3.intermediate.dense.weight: copying a param with shape torch.Size([3072, 256]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for trans_encoder.1.bert.encoder.layer.3.intermediate.dense.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for trans_encoder.1.bert.encoder.layer.3.output.dense.weight: copying a param with shape torch.Size([256, 3072]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for trans_encoder.2.bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([30522, 128]) from checkpoint, the shape in current model is torch.Size([30522, 64]).
size mismatch for trans_encoder.2.bert.embeddings.position_embeddings.weight: copying a param with shape torch.Size([512, 128]) from checkpoint, the shape in current model is torch.Size([512, 64]).
size mismatch for trans_encoder.2.bert.embeddings.token_type_embeddings.weight: copying a param with shape torch.Size([2, 128]) from checkpoint, the shape in current model is torch.Size([2, 64]).
size mismatch for trans_encoder.2.bert.embeddings.LayerNorm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.embeddings.LayerNorm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.attention.self.query.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.attention.self.query.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.attention.self.key.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.attention.self.key.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.attention.self.value.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.attention.self.value.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.attention.output.dense.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.attention.output.dense.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.attention.output.LayerNorm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.attention.output.LayerNorm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.intermediate.dense.weight: copying a param with shape torch.Size([3072, 128]) from checkpoint, the shape in current model is torch.Size([256, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.intermediate.dense.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.output.dense.weight: copying a param with shape torch.Size([128, 3072]) from checkpoint, the shape in current model is torch.Size([64, 256]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.output.dense.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.output.LayerNorm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.0.output.LayerNorm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.attention.self.query.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.attention.self.query.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.attention.self.key.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.attention.self.key.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.attention.self.value.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.attention.self.value.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.attention.output.dense.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.attention.output.dense.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.attention.output.LayerNorm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.attention.output.LayerNorm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.intermediate.dense.weight: copying a param with shape torch.Size([3072, 128]) from checkpoint, the shape in current model is torch.Size([256, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.intermediate.dense.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.output.dense.weight: copying a param with shape torch.Size([128, 3072]) from checkpoint, the shape in current model is torch.Size([64, 256]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.output.dense.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.output.LayerNorm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.1.output.LayerNorm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.attention.self.query.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.attention.self.query.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.attention.self.key.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.attention.self.key.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.attention.self.value.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.attention.self.value.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.attention.output.dense.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.attention.output.dense.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.attention.output.LayerNorm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.attention.output.LayerNorm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.intermediate.dense.weight: copying a param with shape torch.Size([3072, 128]) from checkpoint, the shape in current model is torch.Size([256, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.intermediate.dense.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.output.dense.weight: copying a param with shape torch.Size([128, 3072]) from checkpoint, the shape in current model is torch.Size([64, 256]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.output.dense.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.output.LayerNorm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.2.output.LayerNorm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.attention.self.query.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.attention.self.query.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.attention.self.key.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.attention.self.key.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.attention.self.value.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.attention.self.value.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.attention.output.dense.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.attention.output.dense.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.attention.output.LayerNorm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.attention.output.LayerNorm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.intermediate.dense.weight: copying a param with shape torch.Size([3072, 128]) from checkpoint, the shape in current model is torch.Size([256, 64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.intermediate.dense.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.output.dense.weight: copying a param with shape torch.Size([128, 3072]) from checkpoint, the shape in current model is torch.Size([64, 256]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.output.dense.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.output.LayerNorm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.encoder.layer.3.output.LayerNorm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.pooler.dense.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for trans_encoder.2.bert.pooler.dense.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.bert.position_embeddings.weight: copying a param with shape torch.Size([512, 128]) from checkpoint, the shape in current model is torch.Size([512, 64]).
size mismatch for trans_encoder.2.bert.img_embedding.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 128]).
size mismatch for trans_encoder.2.bert.img_embedding.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for trans_encoder.2.cls_head.weight: copying a param with shape torch.Size([3, 128]) from checkpoint, the shape in current model is torch.Size([3, 64]).
size mismatch for upsampling.weight: copying a param with shape torch.Size([1723, 431]) from checkpoint, the shape in current model is torch.Size([778, 195]).
size mismatch for upsampling.bias: copying a param with shape torch.Size([1723]) from checkpoint, the shape in current model is torch.Size([778]).
size mismatch for cam_param_fc2.weight: copying a param with shape torch.Size([250, 431]) from checkpoint, the shape in current model is torch.Size([150, 216]).
size mismatch for cam_param_fc2.bias: copying a param with shape torch.Size([250]) from checkpoint, the shape in current model is torch.Size([150]).
size mismatch for cam_param_fc3.weight: copying a param with shape torch.Size([3, 250]) from checkpoint, the shape in current model is torch.Size([3, 150]).
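
The dimensions the checkpoint expects can be read straight off the tensor shapes in the error list above. A minimal diagnostic sketch (not part of the METRO repo; the checkpoint path is the one from the log) that loads the state dict on its own and reports the width of each transformer block:

# infer_ckpt_dims.py -- diagnostic sketch: report the per-block hidden and
# intermediate widths that metro_hand_state_dict.bin was trained with,
# inferred from its tensor shapes (no model construction needed).
import torch

sd = torch.load("./models/metro_release/metro_hand_state_dict.bin",
                map_location="cpu")
for i in range(3):
    # output.dense maps intermediate -> hidden: its row count is hidden_size
    hidden = sd[f"trans_encoder.{i}.bert.encoder.layer.0.output.dense.weight"].shape[0]
    # intermediate.dense maps hidden -> intermediate: its row count is intermediate_size
    inter = sd[f"trans_encoder.{i}.bert.encoder.layer.0.intermediate.dense.weight"].shape[0]
    print(f"block {i}: hidden_size={hidden}, intermediate_size={inter}")

Against the shapes in the traceback this would print hidden sizes 1024, 256, 128 with intermediate_size 3072 throughout (the BERT default), whereas the current script builds blocks of 1024, 256, 64 with intermediate_size = 4 × hidden (4096, 1024, 256), per the "Update config parameter" lines above. Since 3072 is neither 4 × 1024 nor 4 × 256, no hidden-dim setting alone can reconcile the two, which points to a code/checkpoint version mismatch rather than a wrong flag.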
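Note also that strict=False in load_state_dict only ignores missing or unexpected keys; shape mismatches still raise. A more general sketch, assuming _metro_network has been built exactly as in end2end_inference_handmesh.py up to the failing load_state_dict call, that prints every mismatched parameter instead of raising:

# diff_shapes.py -- diagnostic sketch: list every parameter whose shape in the
# checkpoint disagrees with the freshly constructed model, without raising.
import torch

def diff_shapes(model, checkpoint_path):
    ckpt = torch.load(checkpoint_path, map_location="cpu")
    model_state = model.state_dict()
    for name, tensor in ckpt.items():
        if name not in model_state:
            print(f"only in checkpoint: {name}")
        elif tuple(tensor.shape) != tuple(model_state[name].shape):
            print(f"{name}: checkpoint {tuple(tensor.shape)} vs "
                  f"model {tuple(model_state[name].shape)}")

# usage, with the model and checkpoint from the log:
# diff_shapes(_metro_network, "./models/metro_release/metro_hand_state_dict.bin")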