Comments (17)

LiJunnan1992 commented on May 22, 2024

Hi,

The dataset json files for pre-training are provided.

Yes, for NLVR2 we perform an additional pre-training step to learn to reason over two images.

haoshuai714 commented on May 22, 2024

Thank you! But I cannot find these files, such as coco_karpathy_train.json, vg_caption.json, conceptual_caption_train.json, conceptual_caption_val.json, and sbu_caption.json. Moreover, you use these files for pre-training the NLVR2 task, but each entry has only one image and one caption. How can they be used for that pre-training?

LiJunnan1992 commented on May 22, 2024

You can find the pre-training annotation files here: https://storage.googleapis.com/sfr-pcl-data-research/ALBEF/json_pretrain.zip. It is the same 4M data that we use to pre-train ALBEF.
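A minimal sketch for fetching that archive and checking what it contains, using only the Python standard library; the record schema hinted at in the last line is an assumption based on this thread, not something confirmed here:

import json
import urllib.request
import zipfile

URL = "https://storage.googleapis.com/sfr-pcl-data-research/ALBEF/json_pretrain.zip"

# Download and unpack the pre-training annotation files.
urllib.request.urlretrieve(URL, "json_pretrain.zip")
with zipfile.ZipFile("json_pretrain.zip") as zf:
    print(zf.namelist())  # coco.json, vg.json, cc3m_train.json, cc3m_val.json, sbu.json
    zf.extractall("data")

# Peek at one record to see the schema.
with open("data/coco.json") as f:
    entries = json.load(f)
print(entries[0])  # assumed to look like {"image": "<path>", "caption": "<text>"}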

The NLVR2 model is pre-trained with the text-assignment task; you can find the details in our paper.

haoshuai714 commented on May 22, 2024

I'm sorry, I still don't get it. ALBEF/configs/NLVR_pretrain.yaml lists coco_karpathy_train.json, vg_caption.json, and conceptual_caption_train.json;
however, json_pretrain.zip does not contain files with these names.
In other words, what do I need to do to reproduce the NLVR2 result in Table 3?

haoshuai714 commented on May 22, 2024

The README has two steps:

Step 1:
python -m torch.distributed.launch --nproc_per_node=8 --use_env Pretrain_nlvr.py \
--config ./configs/NLVR_pretrain.yaml \
--output_dir output/NLVR_pretrain \
--checkpoint [Pretrained checkpoint]

Step 2:
python -m torch.distributed.launch --nproc_per_node=8 --use_env NLVR.py \
--config ./configs/NLVR.yaml \
--output_dir output/NLVR \
--checkpoint [TA pretrained checkpoint]

I want to know how to get the files listed in NLVR_pretrain.yaml (coco_karpathy_train.json, vg_caption.json, conceptual_caption_train.json).
These file names differ from the ones in the pre-training json zip.

LiJunnan1992 commented on May 22, 2024

For step 1, you can use
train_file: ['data/coco.json', 'data/vg.json', 'data/cc3m_train.json', 'data/cc3m_val.json', 'data/sbu.json' ]

Note that you need to modify the image paths in these json files to be your own paths.
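A minimal sketch of that path rewrite, assuming each record is a dict with an "image" key holding a path; the OLD_PREFIX value below is hypothetical, so use whatever prefix your downloaded jsons actually contain:

import json

OLD_PREFIX = "/export/share/"   # hypothetical prefix baked into the downloaded jsons
NEW_PREFIX = "/my/images/"      # your own image root

for name in ["coco.json", "vg.json", "cc3m_train.json", "cc3m_val.json", "sbu.json"]:
    with open(f"data/{name}") as f:
        entries = json.load(f)
    for e in entries:
        # Swap only the leading prefix of each image path.
        e["image"] = e["image"].replace(OLD_PREFIX, NEW_PREFIX, 1)
    with open(f"data/{name}", "w") as f:
        json.dump(entries, f)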

haoshuai714 commented on May 22, 2024

Then, do I run:
python -m torch.distributed.launch --nproc_per_node=8 --use_env NLVR.py \
--config ./configs/NLVR.yaml \
--output_dir output/NLVR \
--checkpoint [TA pretrained checkpoint]

LiJunnan1992 commented on May 22, 2024

> Then, do I run: python -m torch.distributed.launch --nproc_per_node=8 --use_env NLVR.py --config ./configs/NLVR.yaml --output_dir output/NLVR --checkpoint [TA pretrained checkpoint]

Yes!

haoshuai714 commented on May 22, 2024

Is [TA pretrained checkpoint] the checkpoint from the original pre-training, or do I get it by running the following command?
python -m torch.distributed.launch --nproc_per_node=8 --use_env Pretrain_nlvr.py \
--config ./configs/NLVR_pretrain.yaml \
--output_dir output/NLVR_pretrain \
--checkpoint [Pretrained checkpoint]

LiJunnan1992 commented on May 22, 2024

It is the checkpoint after the Pretrain_nlvr step; you can also download it here: https://storage.googleapis.com/sfr-pcl-data-research/ALBEF/pretrain_model_nlvr.pth
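A quick sanity check that the downloaded TA checkpoint loads before passing it to NLVR.py; a minimal sketch, and the exact top-level keys depend on how the checkpoint was saved, so they are not guaranteed:

import urllib.request
import torch

URL = "https://storage.googleapis.com/sfr-pcl-data-research/ALBEF/pretrain_model_nlvr.pth"
urllib.request.urlretrieve(URL, "pretrain_model_nlvr.pth")

# Load on CPU just to inspect the contents.
ckpt = torch.load("pretrain_model_nlvr.pth", map_location="cpu")
print(ckpt.keys())  # often a "model" state dict plus training metadata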

haoshuai714 commented on May 22, 2024

Thank you for your answer. I want to know how to get these files (coco_karpathy_train.json, vg_caption.json, conceptual_caption_train.json) for the Pretrain_nlvr step.
Moreover, why can the COCO, VG, CC3M, and SBU datasets be used for Pretrain_nlvr?
What did you do with these datasets?
Thank you!

LiJunnan1992 commented on May 22, 2024

coco_karpathy_train.json contains the same annotations as the coco.json used for pre-training.

Please refer to our paper for the description of the TA task.
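For intuition, the paper describes text-assignment (TA) as: given an image pair and a text, assign the text to the first image, the second image, or neither. That is how single-image caption data becomes image-pair training data. A rough illustrative sketch of building such triplets; the sampling and label scheme here are one reading of the paper, not the repo's exact code in Pretrain_nlvr.py:

import random

def make_ta_example(entries, i):
    """Build one (img1, img2, text, label) TA example.

    entries: records like {"image": ..., "caption": ...}
    label: 0 = text describes img1, 1 = img2, 2 = neither.
    """
    anchor = entries[i]
    # Random distractor image (in practice one would avoid sampling the anchor itself).
    rand_img = lambda: random.choice(entries)["image"]
    label = random.randrange(3)
    if label == 0:
        return anchor["image"], rand_img(), anchor["caption"], 0
    if label == 1:
        return rand_img(), anchor["image"], anchor["caption"], 1
    # Neither image matches: pair the caption with two random images.
    return rand_img(), rand_img(), anchor["caption"], 2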

haoshuai714 commented on May 22, 2024

Sorry, I still don't understand. Where can I get the files listed below for the Pretrain_nlvr step?
train_file: ['/export/home/project/VL/dataset/caption/coco_karpathy_train.json',
'/export/home/project/VL/dataset/caption/vg_caption.json',
'/export/home/project/VL/dataset/pretrain_caption/conceptual_caption_train.json',
'/export/home/project/VL/dataset/pretrain_caption/conceptual_caption_val.json',
'/export/home/project/VL/dataset/pretrain_caption/sbu_caption.json'
]

haoshuai714 commented on May 22, 2024

NLVR2 requires an additional pre-training step with text-assignment (TA) to adapt the model for image-pair inputs. In order to perform TA, first set the paths for the json training files in configs/NLVR_pretrain.yaml, then run:

python -m torch.distributed.launch --nproc_per_node=8 --use_env Pretrain_nlvr.py \
--config ./configs/NLVR_pretrain.yaml \
--output_dir output/NLVR_pretrain \
--checkpoint [Pretrained checkpoint]

But the pre-training json files listed in NLVR_pretrain.yaml cannot be found.

LiJunnan1992 commented on May 22, 2024

> Sorry, I still don't understand. Where can I get the files listed below for the Pretrain_nlvr step? train_file: ['/export/home/project/VL/dataset/caption/coco_karpathy_train.json', '/export/home/project/VL/dataset/caption/vg_caption.json', '/export/home/project/VL/dataset/pretrain_caption/conceptual_caption_train.json', '/export/home/project/VL/dataset/pretrain_caption/conceptual_caption_val.json', '/export/home/project/VL/dataset/pretrain_caption/sbu_caption.json']

Those files are the SAME as ['data/coco.json', 'data/vg.json', 'data/cc3m_train.json', 'data/cc3m_val.json', 'data/sbu.json' ]

haoshuai714 commented on May 22, 2024

OK, Thanks.

haoshuai714 commented on May 22, 2024

> Sorry, I still don't understand. Where can I get the files listed below for the Pretrain_nlvr step? train_file: ['/export/home/project/VL/dataset/caption/coco_karpathy_train.json', '/export/home/project/VL/dataset/caption/vg_caption.json', '/export/home/project/VL/dataset/pretrain_caption/conceptual_caption_train.json', '/export/home/project/VL/dataset/pretrain_caption/conceptual_caption_val.json', '/export/home/project/VL/dataset/pretrain_caption/sbu_caption.json']
>
> Those files are the SAME as ['data/coco.json', 'data/vg.json', 'data/cc3m_train.json', 'data/cc3m_val.json', 'data/sbu.json']

If those files are the same as the data/xx.json files, why run the NLVR_pretrain step at all?
Why not directly use the pre-training checkpoint?
I don't understand the NLVR_pretrain step if its json files are the same as in the pre-training step.
After all, the NLVR2 task has two images and one text, so why is the json file the same as in the pre-training step?
