
codeaudit / treelstm.pytorch


This project forked from dasguptar/treelstm.pytorch


Tree LSTM implementation in PyTorch

License: MIT License

Languages: Python 70.02%, Shell 0.50%, Java 29.48%


Tree-Structured Long Short-Term Memory Networks

This is a PyTorch implementation of Tree-LSTM as described in the paper Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks by Kai Sheng Tai, Richard Socher, and Christopher Manning. On the semantic similarity task using the SICK dataset, this implementation reaches a Pearson's coefficient of 0.8476 and an MSE of 0.2896.
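
For orientation, the core building block from the paper is the Child-Sum Tree-LSTM cell, in which each node sums its children's hidden states and computes a separate forget gate per child. Below is a minimal, self-contained sketch of that update; the class and argument names are illustrative and do not necessarily match the modules in this repository.

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    """Minimal Child-Sum Tree-LSTM node update (Tai et al., 2015) -- illustrative sketch."""

    def __init__(self, in_dim, mem_dim):
        super().__init__()
        # input/output/candidate gates are driven by the *sum* of child hidden states
        self.iou_x = nn.Linear(in_dim, 3 * mem_dim)
        self.iou_h = nn.Linear(mem_dim, 3 * mem_dim)
        # one forget gate per child, driven by that child's own hidden state
        self.f_x = nn.Linear(in_dim, mem_dim)
        self.f_h = nn.Linear(mem_dim, mem_dim)

    def forward(self, x, child_c, child_h):
        # x: (in_dim,) embedding of the current node's word
        # child_c, child_h: (num_children, mem_dim); pass zeros of shape (1, mem_dim) at leaves
        h_tilde = child_h.sum(dim=0)
        i, o, u = torch.chunk(self.iou_x(x) + self.iou_h(h_tilde), 3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.f_x(x).unsqueeze(0) + self.f_h(child_h))  # per-child forget gates
        c = i * u + (f * child_c).sum(dim=0)   # new memory cell
        h = o * torch.tanh(c)                  # new hidden state
        return c, h
```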

Requirements

  • Python (tested on 2.7.13 and 3.6.3)
  • PyTorch (tested on 0.1.12 and 0.2.0)
  • tqdm
  • Java >= 8 (for Stanford CoreNLP utilities)

Usage

  • First run the script ./fetch_and_preprocess.sh, which, as the name suggests, does two things:
    • fetch the required data: the SICK dataset, the GloVe word vectors, and the Stanford CoreNLP parsing tools;
    • preprocess that data, i.e. generate the dependency parses consumed by the Dependency Tree-LSTM.
  • Run python main.py to try the Dependency Tree-LSTM from the paper to predict similarity for pairs of sentences on the SICK dataset. For a list of all command-line arguments, have a look at config.py.
    • The first run takes a few minutes, because it reads the GLOVE embeddings for the words in the SICK vocabulary and stores them in a cache; later runs read only this cache (see the sketch after this list).
    • Logs and model checkpoints are saved to the checkpoints/ directory with the name specified by the command line argument --expname.
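
The caching behaviour described above amounts to a torch.save/torch.load round-trip keyed on the SICK vocabulary. Below is a rough sketch of that pattern, with purely illustrative file paths and a hypothetical helper name, not the repository's actual API.

```python
import os
import torch

def load_glove_for_vocab(glove_path, vocab, cache_path, dim=300):
    """Build (or reload) an embedding matrix for `vocab`; illustrative helper."""
    if os.path.isfile(cache_path):
        return torch.load(cache_path)              # later runs read only this cache
    emb = torch.empty(len(vocab), dim).uniform_(-0.05, 0.05)
    glove = {}
    with open(glove_path, encoding='utf-8') as f:  # slow part: one pass over the full GloVe file
        for line in f:
            word, *values = line.rstrip().split(' ')
            glove[word] = torch.tensor([float(v) for v in values])
    for idx, word in enumerate(vocab):
        if word in glove:
            emb[idx] = glove[word]
    torch.save(emb, cache_path)                    # first run pays the cost once
    return emb
```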

Results

Using hyperparameters --lr 0.01 --wd 0.0001 --optim adagrad --batchsize 25 gives a Pearson's coefficient of 0.8476 and an MSE of 0.2896, compared to a Pearson's coefficient of 0.8676 and an MSE of 0.2532 in the original paper. The difference is probably due to how the word embeddings are updated: in the paper, the embeddings are updated with plain SGD, separately from the rest of the model, whereas here a single optimizer updates all of the model parameters.
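
For anyone who wants to imitate the paper's scheme, the embedding table can be excluded from the main optimizer and given its own plain-SGD step. The sketch below uses a toy stand-in model with an emb attribute; the model, the embedding learning rate, and the wiring are illustrative assumptions, not how this code currently works.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# toy stand-in model: an embedding table plus a small scorer (illustrative only)
class ToyModel(nn.Module):
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, 1)

    def forward(self, ids):
        return self.out(self.emb(ids).mean(dim=0))

model = ToyModel()

# paper-style split: Adagrad with weight decay for everything except the embeddings,
# plain SGD (no weight decay) for the embedding table itself
emb_param_ids = {id(p) for p in model.emb.parameters()}
other_params = [p for p in model.parameters() if id(p) not in emb_param_ids]
main_optimizer = optim.Adagrad(other_params, lr=0.01, weight_decay=1e-4)
emb_optimizer = optim.SGD(model.emb.parameters(), lr=0.1)  # learning rate is illustrative

loss = model(torch.tensor([1, 2, 3])).sum()
loss.backward()
main_optimizer.step()   # updates everything except the embeddings
emb_optimizer.step()    # separate plain-SGD step for the embeddings
```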

Notes

  • (Nov 08, 2017) Refactored model to get 1.5x - 2x speedup.
  • (Oct 23, 2017) Now works with PyTorch 0.2.0.
  • (May 04, 2017) Added support for sparse tensors. Using the --sparse argument will enable sparse gradient updates for nn.Embedding, potentially reducing memory usage.
    • There are two caveats, however: weight decay does not work in conjunction with sparse gradients, and the results from the original paper might not be reproducible with sparse embeddings (see the sketch after this list).
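
In PyTorch terms, the --sparse option corresponds to building the embedding with sparse=True, so its gradient becomes a sparse tensor; only optimizers that accept sparse gradients (e.g. SGD, Adagrad, SparseAdam) can then be used, and weight decay must be disabled for those parameters. A small illustrative sketch, independent of this repository's actual flag handling:

```python
import torch
import torch.nn as nn
import torch.optim as optim

vocab_size, dim = 1000, 50                          # illustrative sizes
emb = nn.Embedding(vocab_size, dim, sparse=True)    # gradients become sparse tensors

# Adagrad accepts sparse gradients, but weight_decay must stay 0 here:
# combining weight decay with sparse gradients raises an error.
optimizer = optim.Adagrad(emb.parameters(), lr=0.01, weight_decay=0.0)

ids = torch.tensor([3, 17, 42])
loss = emb(ids).sum()
loss.backward()
print(emb.weight.grad.is_sparse)                    # True
optimizer.step()                                    # only the touched rows are updated
```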

Acknowledgements

Shout-out to Kai Sheng Tai for the original LuaTorch implementation, and to the PyTorch team for the fun library.

Contact

Riddhiman Dasgupta

This is my first PyTorch-based implementation and might contain bugs. Please let me know if you find any!

License

MIT

Contributors

dasguptar, huangshenno1, soumith, vinhdv

