batra-mlp-lab / visdial-rl

PyTorch code for Learning Cooperative Visual Dialog Agents using Deep Reinforcement Learning

Shell 1.96% Python 88.96% Lua 7.27% HTML 0.46% JavaScript 1.35%
pytorch computer-vision natural-language-processing deep-learning visual-dialog

visdial-rl's People

Contributors

akshitac8, nirbhayjm, virajprabhu


visdial-rl's Issues

Somehow -visdomServer is overwritten in an unexpected way

This is related to #2.
I ran

python -m visdom.server -p 8097

in order to use visdom.

After that, I ran

python evaluate.py -useGPU \
    -startFrom checkpoints/abot_sl_ep60.vd \
    -qstartFrom checkpoints/qbot_sl_ep60.vd \
    -evalMode ABotRank QBotRank \
    -visdomServer http://127.0.0.1 \
    -visdomServerPort 8097 \
    -visdomEnv ABotRank-QBotRank

Even though I explicitly specified which server and port to use, they were overwritten as follows. How can I fix this?

{'CELossCoeff': 200,
'batchSize': 20,
'beamSize': 1,
'ckpt_iterid': 152160,
'ckpt_lRate': 4.999614155150613e-05,
'cocoDir': '',
'cocoInfo': '',
'continue': True,
'decoder': 'gen',
'dropout': 0.0,
'embedSize': 300,
'enableVisdom': 1,
'encoder': 'hre-ques-lateim-hist',
'evalModeList': ['ABotRank', 'QBotRank'],
'evalSplit': 'val',
'evalTitle': 'eval',
'featLossCoeff': 1000,
'freezeQFeatNet': 0,
'imgEmbedSize': 300,
'imgFeatureSize': 4096,
'imgNorm': 1,
'inputImg': 'data/visdial/data_img.h5',
'inputJson': 'data/visdial/chat_processed_params.json',
'inputQues': 'data/visdial/chat_processed_data.h5',
'learningRate': 0.001,
'lrDecayRate': 0.9997592083,
'minLRate': 5e-05,
'numEpochs': 65,
'numLayers': 2,
'numRounds': 10,
'numWorkers': 2,
'qdecoder': 'gen',
'qencoder': 'hre-ques-lateim-hist',
'qstartFrom': 'checkpoints/qbot_sl_ep60.vd',
'randomSeed': 32,
'rlAbotReward': 1,
'rnnHiddenSize': 512,
'saveName': 'debug-final5_sl-qbot',
'savePath': 'checkpoints/debug-final5_sl-qbot',
'startFrom': 'checkpoints/abot_sl_ep60.vd',
'trainMode': 'sl-abot',
'useCurriculum': 1,
'useGPU': True,
'useHistory': True,
'useIm': 'late',
'verbose': 1,
'visdomEnv': 'ABotRank-QBotRank',
'visdomServer': 'http://walle.cc.gatech.edu',
'visdomServerPort': 8893,
'vocabSize': 7826}
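For what it's worth, the dump above suggests the checkpoint carries its training-time options (note 'continue': True and the walle.cc.gatech.edu server), which then clobber the command-line flags. A minimal workaround sketch; the function and dictionary names here are assumptions for illustration, not the repo's actual API:

```python
# Hypothetical sketch: after restoring the checkpoint's saved options,
# re-apply the CLI values for keys that only affect visualization.
def merge_params(cli_params, ckpt_params,
                 keep_from_cli=('visdomServer', 'visdomServerPort', 'visdomEnv')):
    """Start from the checkpoint's options, but let CLI values win
    for display-only keys."""
    merged = dict(ckpt_params)
    for key in keep_from_cli:
        if key in cli_params:
            merged[key] = cli_params[key]
    return merged

cli = {'visdomServer': 'http://127.0.0.1', 'visdomServerPort': 8097}
ckpt = {'visdomServer': 'http://walle.cc.gatech.edu',
        'visdomServerPort': 8893, 'batchSize': 20}
print(merge_params(cli, ckpt))
```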

So many UNK in captions

Thanks to your help, I managed to run your code.
I have one more question.
I ran

python evaluate.py -useGPU \
    -startFrom checkpoints/abot_rl_ep20.vd \
    -qstartFrom checkpoints/qbot_rl_ep20.vd \
    -evalMode dialog \
    -cocoDir /my/path/to/coco/images/ \
    -cocoInfo /my/path/to/coco.json \
    -beamSize 5

and then ran

cd dialog_output/
python -m http.server 8000

However, I found that the visualized captions were quite different from those in your example image: there were many "UNK" tokens in my results. Is this expected? If not, under what conditions can I reproduce results similar to yours?


REINFORCE call in the decoder base class needs a detach call on top of the reward.

loss += -1 * log_prob * (reward * (self.mask[:, t].float()))

Following the paper, the above should be replaced by

loss += -1 * log_prob * (reward.detach() * (self.mask[:, t].float()))

Not having a .detach() on the reward here provides another source of gradients to the feature regression module in addition to the feature loss, the only difference being these gradients are scaled by the log-probs, which does not seem to mean anything intuitively.
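A minimal PyTorch sketch (with toy tensors, not the repo's modules) showing what the missing .detach() changes: without it, the parameter that produced the reward would receive a second gradient through the REINFORCE term.

```python
import torch

# Toy illustration of the fix: reward.detach() stops gradients from the
# REINFORCE term flowing back into the module that produced the reward.
log_prob = torch.tensor([-1.2], requires_grad=True)
feat_param = torch.tensor([0.5], requires_grad=True)  # stands in for the feature regression net
reward = feat_param * 2.0                             # reward derived from that module

loss = -1 * log_prob * reward.detach()
loss.backward()

print(log_prob.grad)    # the policy term still gets its gradient
print(feat_param.grad)  # None: detach() blocked the extra gradient path
```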

Beam search backtracking

Hi,

Thank you for providing this code.

Could you explain your code for backtracking in beam search?

In particular, how do you handle sequences that emit EOS early in the forward pass and are dropped from the beam, as done in this implementation?

RuntimeError: CUDA error: out of memory

I ran the command "python train.py -useGPU -trainMode sl-abot", but got "RuntimeError: CUDA error: out of memory". How much GPU memory is needed to train the model? My GPU has 12212 MiB. Thanks!

About -cocoDir and -cocoInfo

When I tried to run evaluate.py with "-evalMode dialog", like this:

python evaluate.py -useGPU \
    -startFrom checkpoints/abot_rl_ep20.vd \
    -qstartFrom checkpoints/qbot_rl_ep20.vd \
    -evalMode dialog \
    -beamSize 5

I got the following error:

[Error] Need coco directory and info as input to -cocoDir and -cocoInfo arguments for locating coco image files.
Exiting dialogDump without saving files.

What should I do about this? Should I point it at the directory containing the MSCOCO dataset?

Also, what are these two arguments (-cocoDir, -cocoInfo) used for?

How to run evaluate.py

Hello.
Thank you for sharing your great work!

After running

python evaluate.py -useGPU \
    -startFrom checkpoints/abot_sl_ep60.vd \
    -qstartFrom checkpoints/qbot_sl_ep60.vd \
    -evalMode ABotRank QBotRank

I got many instances of the following error:

[Errno 110] Connection timed out

I guess that before running the command above, I should run

python -m visdom.server -p <port>

in order to visualize the evaluation results.
Is that correct?

If not, how should I deal with this code?
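If no visdom server is listening, connection timeouts like this are expected, since enableVisdom defaults to 1 (see the options dump in the first issue above). A quick stdlib check (not part of this repo; the host and port match the defaults in the commands above) to verify the server is up before launching evaluate.py:

```python
import socket

def port_open(host='127.0.0.1', port=8097, timeout=1.0):
    """Return True if something is accepting TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open())  # False unless `python -m visdom.server -p 8097` is running
```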

Error creating data mats for train

Kindly help: the following error occurs while executing prepro.py:

Traceback (most recent call last):
  File "prepro.py", line 274, in <module>
    captions_train, captions_train_len, questions_train, questions_train_len, answers_train, answers_train_len, options_train, options_train_list, options_train_len, answers_train_index, images_train_index, images_train_list, _ = create_data_mats(data_train_toks, ques_train_inds, ans_train_inds, args, 'train')
  File "prepro.py", line 127, in create_data_mats
    options[i][j] = np.array(data_toks[image_id]['dialog'][j]['answer_options']) + 1
ValueError: could not broadcast input array from shape (104) into shape (100)
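The error says one dialog round has 104 answer_options while prepro.py preallocates room for only 100 per round. A small NumPy sketch reproducing the mismatch, with one possible workaround (truncating to the expected count; whether truncation is acceptable for your data is an assumption):

```python
import numpy as np

NUM_OPTIONS = 100  # the per-round size the preallocated array expects

options = np.zeros((10, NUM_OPTIONS), dtype=np.int64)
answer_options = list(range(104))  # a round that unexpectedly has 104 options

# options[0] = np.array(answer_options) + 1   # ValueError: shape (104) into (100)
options[0] = np.array(answer_options[:NUM_OPTIONS]) + 1  # truncate to fit
print(options[0][:3])  # [1 2 3]
```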

RuntimeError: beamTokensTable

Do we really need the transpose? I got this error:
beamTokensTable[:, :, 0] = topIdx.transpose(0, 1).data
RuntimeError: The expanded size of the tensor (5) must match the existing size (10) at non-singleton dimension 1. Target sizes: [10, 5]. Tensor sizes: [5, 10]

KeyError: 'test_ans_ind'

Hi
When I run this command


CUDA_VISIBLE_DEVICES=4 python train.py -useGPU -trainMode sl-qbot -enableVisdom 1 \
    -visdomServer http://127.0.0.1 -visdomServerPort 8098 -visdomEnv my-qbot-job


I got this error


  File "train.py", line 39, in <module>
    dataset = VisDialDataset(params, splits)
  File "/home/badri/badripatro/workspace_project/visual_dialog/visdial-rl-master_v10/dataloader.py", line 149, in __init__
    self.prepareDataset(dtype)
  File "/visdial-rl-master_v10/dataloader.py", line 185, in prepareDataset
    self.data[dtype + '_ans_ind'] -= 1
KeyError: 'test_ans_ind'


Should I change something in the "prepro.py" file?

These are the keys present in the "chat_processed_data.h5" file:


f = h5py.File("chat_processed_data.h5")
list(f)
['ans_index_train', 'ans_index_val', 'ans_length_test', 'ans_length_train', 'ans_length_val', 'ans_test', 'ans_train', 'ans_val', 'cap_length_test', 'cap_length_train', 'cap_length_val', 'cap_test', 'cap_train', 'cap_val', 'img_pos_test', 'img_pos_train', 'img_pos_val', 'num_rounds_test', 'opt_length_test', 'opt_length_train', 'opt_length_val', 'opt_list_test', 'opt_list_train', 'opt_list_val', 'opt_test', 'opt_train', 'opt_val', 'ques_length_test', 'ques_length_train', 'ques_length_val', 'ques_test', 'ques_train', 'ques_val']
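Consistent with that listing, there is no ans_index_test key, so the unconditional self.data[dtype + '_ans_ind'] -= 1 fails for the test split (the test set ships without ground-truth answer indices). A minimal sketch of guarding that adjustment, using plain dicts rather than the repo's dataloader:

```python
# Hypothetical sketch: only shift answer indices for splits that have them
# (the h5 file above has ans_index_train/ans_index_val but no test key).
data = {
    'train_ans_ind': [3, 7, 1],
    'val_ans_ind': [2, 5],
    # no 'test_ans_ind' -- the test split lacks ground-truth answers
}

for dtype in ('train', 'val', 'test'):
    key = dtype + '_ans_ind'
    if key in data:  # the guard avoids the KeyError for the test split
        data[key] = [i - 1 for i in data[key]]

print(data['train_ans_ind'])  # [2, 6, 0]
```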

