
This project is forked from jiasenlu/AdaptiveAttention.



Home Page: https://arxiv.org/abs/1612.01887



AdaptiveAttention

Implementation of "Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning"

[Figure: teaser results]

Requirements

Training the model requires a GPU with 12 GB of memory. If you do not have a GPU, you can use the pretrained model directly for inference.

This code is written in Lua and requires Torch. The preprocessing code is in Python, and you need to install NLTK if you want to use it to tokenize the captions.

You also need to install the following packages in order to run the code successfully.
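
The package list is not reproduced here, so as a rough sanity check the sketch below probes for the Torch packages a NeuralTalk2-style captioning codebase typically uses. The package names are assumptions, not this repository's definitive requirements; adjust the list as needed.

```lua
-- check_deps.lua: probe for Torch packages this codebase is likely to need.
-- The list below is an assumption based on NeuralTalk2-style captioning code,
-- not the definitive list for this repository.
local deps = {'torch', 'nn', 'nngraph', 'image', 'cutorch', 'cunn', 'hdf5', 'cjson'}

for _, name in ipairs(deps) do
  local ok = pcall(require, name)
  -- luarocks package names can differ from module names (e.g. lua-cjson for cjson,
  -- torch-hdf5 for hdf5), so MISSING only means the module is not on the Lua path.
  print(string.format('%-8s %s', name, ok and 'OK' or 'MISSING'))
end
```

Run it with `th check_deps.lua`; anything reported MISSING needs to be installed before training.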

Pretrained Model

The pre-trained model for COCO can be downloaded here. The pre-trained model for Flickr30K can be downloaded here.
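
As a quick check that a downloaded checkpoint loads, you can open it with torch.load and list its top-level fields. This is only a sketch: the filename model_id1_36.t7 is a placeholder for whatever file you downloaded, and no assumptions are made about the checkpoint's internal layout.

```lua
-- inspect_checkpoint.lua: verify that a downloaded pretrained model loads.
-- The default filename below is a placeholder; pass the actual file as an argument.
require 'torch'
require 'nn'
-- Note: checkpoints saved on a GPU may additionally need cutorch/cudnn installed.

local path = arg[1] or 'model_id1_36.t7'  -- placeholder name
local checkpoint = torch.load(path)

-- A .t7 checkpoint is an ordinary Lua table; list its top-level fields.
print('fields in ' .. path .. ':')
for k, _ in pairs(checkpoint) do
  print('  ' .. tostring(k))
end
```

Usage: `th inspect_checkpoint.lua <downloaded_model.t7>`.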

Vocabulary File

Download the corresponding vocabulary files for COCO and Flickr30k.

Download Dataset

The first thing you need to do is download the data and do some preprocessing. Head over to the data/ folder and run the corresponding IPython script. It will download and preprocess the data and generate coco_raw.json.

Download the COCO and Flickr30k image datasets, extract the images, and put them somewhere convenient.
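
As a quick sanity check on the preprocessing output, the sketch below reads coco_raw.json with lua-cjson and prints the first record. It assumes the file is a JSON array of per-image records and lives under data/; both details are assumptions, so adjust the path to match where your script wrote the file.

```lua
-- inspect_raw.lua: spot-check the generated coco_raw.json.
-- Assumes the file lives in data/ and is a JSON array of per-image records;
-- field names vary, so the first record is printed rather than assumed.
local cjson = require 'cjson'

local f = assert(io.open('data/coco_raw.json', 'r'))
local raw = cjson.decode(f:read('*a'))
f:close()

print('number of records: ' .. #raw)
print('fields of the first record:')
for k, v in pairs(raw[1]) do
  print('  ' .. k .. ' (' .. type(v) .. ')')
end
```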

Training a New Model on MS COCO

First, train the language model without finetuning the CNN.

th train.lua -batch_size 20 

To finetune the CNN, load the saved model and train for another 15-20 epochs.

th train.lua -batch_size 16 -startEpoch 21 -start_from 'model_id1_20.t7'

More Results on Spatial Attention and the Visual Sentinel

[Figures: additional spatial attention and visual sentinel examples]

For more visualization results, you can visit here (the page loads more than 1000 images and their results...).

Reference

If you use this code as part of any published research, please acknowledge the following paper:

@inproceedings{Lu2017Adaptive,
  author    = {Lu, Jiasen and Xiong, Caiming and Parikh, Devi and Socher, Richard},
  title     = {Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning},
  booktitle = {CVPR},
  year      = {2017}
}

Acknowledgement

This code was developed based on NeuralTalk2.

Thanks to the Torch team and to Facebook for their ResNet implementation.

License

BSD 3-Clause License

