Comments (12)

linjieli222 commented on July 25, 2024

Hi, thank you for your interest. Were you able to evaluate the pretrained model? If so, could you share the evaluation results?

Thanks.

JackHenry1992 commented on July 25, 2024

I have already evaluated the pretrained models provided by your project, and the results match your paper.

Evaluation VQA-ReGAT
Found 2 GPU cards for eval
loading dictionary from ./data/glove/dictionary.pkl
Evaluating on vqa dataset with model trained on vqa dataset
loading features from h5 file ./data/Bottom-up-features-adaptive/val.hdf5
Setting semantic adj matrix to None...
Setting spatial adj matrix to None...
Building ReGAT model with implicit relation and ban fusion method
In ImplicitRelationEncoder, num of graph propogate steps: 1, residual_connection: True
Loading weights from pretrained_models/regat_implicit/ban_1_implicit_vqa_196/model.pth
        Unexpected_keys: []
        Missing_keys: []
100%|
eval score: 65.96

But when training with the same hps.json provided with the pretrained models, the result is very poor.
The training code is the original main.py; I just run python3 main.py --config config/xxx.json.
And I have followed all of your steps.

linjieli222 commented on July 25, 2024

The pretrained models are trained with 4 GPUs (each with 16GB). Therefore, the effective batch size is 64x4 = 256. Since you are using 2 GPUs, the effective batch size for you is 64x2 = 128. My assumption is that the learning rate may be too big for your batch size.
But the accuracy should not be this low. From the log you showed above, even the training accuracy is not improving over the last 6 epochs. I would suspect that there is something wrong with the training data. Could you try evaluating the pretrained model on the training dataset and share your results?
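As a rough illustration of the lr point above (this is just the generic linear-scaling heuristic, not a setting I have tuned for ReGAT, and the variable names below are made up for the sketch):

# Rule of thumb: keep lr / effective_batch_size roughly constant.
base_lr = 1e-3          # lr used for the released models (see hps.json / the log above)
base_batch = 64 * 4     # our setup: 4 GPUs x 64 per GPU = 256
your_batch = 64 * 2     # your setup: 2 GPUs x 64 per GPU = 128

scaled_lr = base_lr * your_batch / base_batch
print(f"suggested starting lr: {scaled_lr:.4g}")  # -> 0.0005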
Thanks!

linjieli222 commented on July 25, 2024

FYI, if you run into errors like the following during evaluation, please pull the repo again.

 File "./model/graph_att_layer.py", line 87, in forward
    self.dim_group[1])
RuntimeError: shape '[64, 20, 16, 64]' is invalid for input of size 1245184

Sorry about the inconvenience.

JackHenry1992 commented on July 25, 2024

I sincerely appreciate your reply.
The model is trained with batch size = 128*2 (2 GPUs, each with a per-GPU batch size of 128).
By modifying args.split='Train', I re-ran the evaluation command python3 eval.py --output_folder pretrained_models/regat_implicit/ban_1_implicit_vqa_196/, and the score is shown as follows:

2020-01-07-11-21-19
Evaluation VQA-ReGAT
Found 2 GPU cards for eval
loading dictionary from ./data/glove/dictionary.pkl
Evaluating on vqa dataset with model trained on vqa dataset
loading features from h5 file ./data/Bottom-up-features-adaptive/train.hdf5
Setting semantic adj matrix to None...
Setting spatial adj matrix to None...
Building ReGAT model with implicit relation and ban fusion method
In ImplicitRelationEncoder, num of graph propogate steps: 1, residual_connection: True
Loading weights from pretrained_models/regat_implicit/ban_1_implicit_vqa_196/model.pth
        Unexpected_keys: []
        Missing_keys: []
100%|████████████████████████████████| 3467/3467 [05:55<00:00,  9.72it/s]
eval score: 83.84

No errors were encountered in the evaluation phase. But during training, one error (which occurs only once, right after epoch 0) shows up:

nParams=        46455506
optim: adamax lr=0.0010, decay_step=2, decay_rate=0.25,grad_clip=0.25
LR decay epochs: 15,17,19
gradual warmup lr: 0.0005
100%|████████████████████████████████████| 1734/1734 [11:29<00:00,  1.92it/s]
epoch 0, time: 825.39
        train_loss: 100227.52, norm: 12704909.1170, score: 24.98
        eval score: 28.65 (92.66)
        entropy:  0.02
saving current model weights to folder

gradual warmup lr: 0.0010
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/admin/miniconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/home/admin/miniconda3/lib/python3.6/site-packages/tqdm/_monitor.py", line 62, in run
    for instance in self.tqdm_cls._instances:
  File "/home/admin/miniconda3/lib/python3.6/_weakrefset.py", line 60, in __iter__
    for itemref in self.data:
RuntimeError: Set changed size during iteration

100%|███████████████████████████████████| 1734/1734 [10:37<00:00,  1.94it/s]
epoch 1, time: 774.55
        train_loss: 22.79, norm: 4514.3939, score: 31.47
        eval score: 33.77 (92.66)
        entropy:  0.04
saving current model weights to folder

The model then continues to train as normal. I am not sure whether this affects performance.

linjieli222 commented on July 25, 2024

Thank you for your patience.

I have never seen this error before. It seems to happen inside the tqdm package; I don't think it would affect the model performance.
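If it keeps showing up, one workaround that should silence it (untested on my end; it simply disables tqdm's background monitor thread, which is where the traceback originates) is to add the following near the top of main.py, before any progress bars are created:

import tqdm

# Disable tqdm's monitor thread; the "Set changed size during iteration"
# error comes from that thread iterating over tqdm._instances while new
# progress bars are created or old ones are garbage-collected.
tqdm.tqdm.monitor_interval = 0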

One thing I do notice is that the train_loss and norm are extremely large at epoch 0. Usually they are around 10, not 100,000. Can you share the exact config file and command you used for training? I will try to run it on my end to replicate the error.

Thanks!

JackHenry1992 commented on July 25, 2024

Here is my detailed config (since *.json cannot be uploaded, I have changed the filename to hps.txt):

hps.txt

Did you use any pretrained models as the initial parameters of the network?

xuewyang commented on July 25, 2024

I am running the same code, but I get better results than yours. I ran 20 epochs, but I think more epochs should be run.
epoch 15, time: 997.29
train_loss: 2.95, norm: 1.6762, score: 66.00
eval score: 58.68 (92.66)
entropy: 4.76
saving current model weights to folder
lr: 0.0005
epoch 16, time: 996.12
train_loss: 2.91, norm: 1.9155, score: 66.71
eval score: 58.87 (92.66)
entropy: 4.74
saving current model weights to folder
decreased lr: 0.0001
epoch 17, time: 999.82
train_loss: 2.86, norm: 1.8121, score: 67.62
eval score: 58.88 (92.66)
entropy: 4.74
saving current model weights to folder
lr: 0.0001
epoch 18, time: 994.42
train_loss: 2.85, norm: 1.7187, score: 67.76
eval score: 58.86 (92.66)
entropy: 4.74
saving current model weights to folder
decreased lr: 0.0000
epoch 19, time: 1010.46
train_loss: 2.84, norm: 1.7219, score: 68.01
eval score: 58.84 (92.66)
entropy: 4.74
saving current model weights to folder

xuewyang commented on July 25, 2024

Probably the lr should be adjusted.

linjieli222 commented on July 25, 2024

Closed due to inactivity. The aforementioned error is not reproducible on my end.

alice-cool commented on July 25, 2024

How is the relation type encoded into the explicit encoder? I couldn't find it represented in the code. If you have an answer, please help me; sorry to bother you. I guess the relation-type labels come from the datasets, and the code doesn't include the auxiliary classifier for the 15 semantic types and 11 geometric types.

linjieli222 commented on July 25, 2024

To replicate the results from our paper, please follow the instructions to download the exact data.

For the spatial adj matrix, please refer to #9 .
For the semantic adj matrix, we are not releasing the model at the moment, but it is a very small and simple classification model trained on Visual Genome; you can refer to this paper:
Ting_Yao_Exploring_Visual_Relationship_ECCV_2018
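
For reference only, here is a very rough sketch of how such a pairwise classifier's predictions could be turned into a semantic adjacency matrix. The 15-type count comes from the discussion above; every shape, threshold, and name below is an assumption for illustration, not the exact format this repo consumes:

import numpy as np

NUM_SEM_CLASSES = 15  # semantic relation types; label 0 is reserved for "no relation"

def build_semantic_adj(pair_logits, threshold=0.5):
    # pair_logits: [num_objs, num_objs, NUM_SEM_CLASSES] scores from a
    # (hypothetical) pairwise relation classifier trained on Visual Genome.
    # Returns a [num_objs, num_objs] integer matrix of relation-type labels.
    num_objs = pair_logits.shape[0]
    probs = 1.0 / (1.0 + np.exp(-pair_logits))   # per-class sigmoid
    best = probs.argmax(axis=-1)                 # most likely relation per pair
    conf = probs.max(axis=-1)
    adj = np.zeros((num_objs, num_objs), dtype=np.int64)
    keep = conf >= threshold                     # keep only confident pairs
    adj[keep] = best[keep] + 1                   # labels 1..15, 0 means "no relation"
    np.fill_diagonal(adj, 0)                     # no self-relations
    return adj

# Example: 36 detected objects with random scores
adj = build_semantic_adj(np.random.randn(36, 36, NUM_SEM_CLASSES))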
