bryanplummer / cite
Implementation for our paper "Conditional Image-Text Embedding Networks"
License: MIT License
Hello, I extracted the visual features generated from the RPN using Faster R-CNN. Could you tell me whether you used a ROIPooling layer when extracting the features? That is, did you map the boxes onto the pool5 feature map, pass them through a ROIPooling layer to get a 7×7 feature, and then extract fc7? I did that, but it does not work: the accuracy is only 33%, so I suspect the way I extracted the features is wrong. Thank you.
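For reference, here is a simplified NumPy sketch of the ROI max-pooling step I implemented (the stride-16 box mapping and the bin boundaries are my assumptions; the Caffe ROIPooling layer quantizes bins slightly differently):

```python
import numpy as np

def roi_pool(feature_map, box, output_size=7):
    """Simplified ROI max-pooling.

    feature_map: (C, H, W) conv features (e.g. pool5).
    box: (x1, y1, x2, y2) in feature-map coordinates, i.e. the image-space
         box already divided by the network stride (16 for VGG16 -- my
         assumption about the setup).
    """
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    region = feature_map[:, y1:y2 + 1, x1:x2 + 1]
    c, h, w = region.shape
    # Divide the region into an output_size x output_size grid of bins
    # and keep the max activation inside each bin.
    ys = np.linspace(0, h, output_size + 1).astype(int)
    xs = np.linspace(0, w, output_size + 1).astype(int)
    out = np.empty((c, output_size, output_size), dtype=feature_map.dtype)
    for i in range(output_size):
        for j in range(output_size):
            bin_ = region[:, ys[i]:max(ys[i + 1], ys[i] + 1),
                             xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[:, i, j] = bin_.max(axis=(1, 2))
    return out

# Toy check: a 14x14 conv map and a box covering the whole map.
fm = np.arange(2 * 14 * 14, dtype=np.float32).reshape(2, 14, 14)
pooled = roi_pool(fm, (0, 0, 13, 13))  # shape (2, 7, 7)
```

The 7×7 output is then flattened and passed through the fully connected layers to obtain fc7.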
Hello, I am trying your code and I found that training is very slow. I did not change anything in your code, and I used two GTX 1080 Ti cards, yet the model takes about 18 hours per epoch. I do not know whether something is wrong on my end or whether it is expected to be this slow.
Hi, thank you for your great work! I am a little confused about the training data used in your code. The data organization you describe in https://github.com/BryanPlummer/cite/tree/master/data_processing_example
is in HDF5 form, right? I don't understand the meaning of <pair identifier> in data['pair'] in the h5 file. I guess the second element of the pair indicates whether the phrase is a ground-truth phrase for the image, because your code says augmented phrases can be used for training, but what is the meaning of the first element? Also, when you count the ground-truth phrases for an image, your code seems wrong:
you count the number of ground-truth phrases before appending the current ground-truth phrase to the list. By the way, how did you generate the augmented phrases? Could you explain that a little? Are the results in your paper trained with these augmented phrases?
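To make my question concrete, here is how I am currently interpreting the pairs dataset (the key name 'pairs' and the two-column meaning are my guesses from reading data_processing_example, not something the README confirms):

```python
import os
import tempfile

import h5py
import numpy as np

# Toy file mimicking what I *think* the layout is; the key name and the
# meaning of each column are assumptions, not the confirmed format.
path = os.path.join(tempfile.mkdtemp(), 'toy_data.h5')
with h5py.File(path, 'w') as f:
    # Guess: each row is (<pair identifier>, <ground-truth flag>).
    f.create_dataset('pairs', data=np.array([[b'im1_phrase1', b'1'],
                                             [b'im1_phrase2', b'0']]))

parsed = []
with h5py.File(path, 'r') as f:
    for pair_id, gt_flag in f['pairs'][:]:
        # Second column: b'1' = ground-truth phrase, b'0' = augmented (my guess).
        parsed.append((pair_id.decode(), gt_flag == b'1'))
# parsed -> [('im1_phrase1', True), ('im1_phrase2', False)]
```

If the first element means something else entirely (e.g. an image/sentence index rather than an image-phrase identifier), a short note in the README would help a lot.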
Hello:
I am a master's student trying to build on this excellent work. I failed to use your plc-c code to compute the HGLMM features: I spent several days trying to fix it, but it still reports that some words do not exist in the vocabulary. So I decided to use your uploaded text features for my experiments, but the ***_imfeats.h5 files contain no fields such as phrase_id or phrase_type. I do not know how to map the phrases to their coarse categories such as "people" or "scene". Could you please tell me how you do this?
Thank you very much for your precious time!
Hello, I am very sorry to bother you. I would like to run your code to extract the visual features, but I cannot find where to download the "fastrcnn_feat.prototxt" file. Could you please upload this file, or tell me which layer (fc7, or fc7 after ReLU) you used in your CITE paper? Thank you very much.