cbaziotis / seq3

Source code for the NAACL 2019 paper "SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression"

Topics: gumbel-softmax, seq2seq, seq2seq2seq, sentence-compression, summarization, abstractive-summarization, autoencoder
seq3's Introduction

This repository contains source code for the NAACL 2019 paper "SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression" (Paper).

Introduction

The paper presents a sequence-to-sequence-to-sequence (SEQ3) autoencoder consisting of two chained encoder-decoder pairs, with words used as a sequence of discrete latent variables. We employ continuous approximations to sampling from categorical distributions, in order to generate the latent sequence of words. This enables gradient-based optimization.
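For illustration, the sketch below (not the repository's actual code) shows the kind of continuous relaxation meant here: Gumbel-Softmax sampling from a categorical distribution over the vocabulary, which keeps the word-sampling step differentiable. The function name and the shapes are placeholders.

    import torch
    import torch.nn.functional as F

    def gumbel_softmax_sample(logits, tau=1.0, hard=False):
        # logits: (batch, vocab) unnormalized scores from the decoder.
        # tau:    temperature; lower values give samples closer to one-hot.
        # hard:   if True, return a one-hot vector in the forward pass but
        #         keep the soft gradients (straight-through estimator).
        gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-10) + 1e-10)
        y_soft = F.softmax((logits + gumbel) / tau, dim=-1)
        if hard:
            index = y_soft.argmax(dim=-1)
            y_hard = F.one_hot(index, num_classes=y_soft.size(-1)).to(y_soft.dtype)
            return (y_hard - y_soft).detach() + y_soft
        return y_soft

    # A "soft" word can then be fed to the next encoder as a weighted
    # mixture of embeddings: soft_word @ embedding_matrix.
    logits = torch.randn(2, 10000)               # (batch=2, vocab=10000)
    soft_word = gumbel_softmax_sample(logits, tau=0.5)

Recent PyTorch versions also ship torch.nn.functional.gumbel_softmax, which implements the same idea.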

We apply the proposed model to unsupervised abstractive sentence compression, where the first and last sequences are the input and reconstructed sentences, respectively, while the middle sequence is the compressed sentence. Constraining the length of the latent word sequences forces the model to distill important information from the input.

Architecture

Reference
@inproceedings{baziotis2019naacl,
    title = {\textsc{SEQ}\textsuperscript{3}: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression},
    author = {Christos Baziotis and Ion Androutsopoulos and Ioannis Konstas and Alexandros Potamianos},
    booktitle = {Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL:HLT)},
    address = {Minneapolis, USA},
    month = {June},
    url = {https://arxiv.org/abs/1904.03651},
    year = {2019}
}

Prerequisites

Dependencies

  • PyTorch version >= 1.0.0
  • Python version >= 3.6

Install Requirements

Create Environment (Optional): Ideally, you should create an environment for the project.

conda create -n seq3 python=3
conda activate seq3

Install PyTorch 1.0 with the desired CUDA version if you want to use the GPU, and then install the rest of the requirements:

pip install -r requirements.txt

Download Data

To train the model you need to download the training data and the pretrained word embeddings.

Dataset: In our experiments we used the Gigaword dataset, which can be downloaded from https://github.com/harvardnlp/sent-summary. Extract the data into datasets/gigaword/ and organize the files as:

datasets
└── gigaword
    ├── dev/
    ├── test1951/
    ├── train.article.txt
    ├── train.title.txt
    ├── valid.article.filter.txt
    └── valid.title.filter.txt

In datasets/gigaword/dev/ you will find a small subset of the source training data (i.e., the articles; the target summaries are never used), which was used for prototyping, as well as a dev set with 4K parallel sentences for evaluation.

You can also use your own data, as long as the source and target data are text files with one sentence per line.
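As a quick sanity check on custom data, something along these lines (a minimal sketch; the paths are placeholders) verifies the one-sentence-per-line format and, for parallel evaluation data, that the source and target files are line-aligned:

    # Placeholders: point these at your own files.
    src_path = "datasets/mydata/train.source.txt"
    tgt_path = "datasets/mydata/train.target.txt"

    with open(src_path, encoding="utf-8") as f:
        src = [line.strip() for line in f]
    with open(tgt_path, encoding="utf-8") as f:
        tgt = [line.strip() for line in f]

    assert all(src), "empty lines found in the source file"
    assert len(src) == len(tgt), f"line count mismatch: {len(src)} vs {len(tgt)}"
    print(f"{len(src)} sentence pairs look fine")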

Embeddings: In our experiments we used the "Wikipedia 2014 + Gigaword 5" (6B) GloVe embeddings: http://nlp.stanford.edu/data/wordvecs/glove.6B.zip. Put the embedding files in the embeddings/ directory.
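Each GloVe file is plain text with one token per line followed by its vector components. If you want to inspect the embeddings yourself, a minimal reader (independent of the project's own loading code) looks like this:

    import numpy as np

    def load_glove(path, dim):
        # Each line is: word v1 v2 ... v_dim (space-separated).
        vectors = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip().split(" ")
                word, values = parts[0], parts[1:]
                if len(values) != dim:      # skip malformed lines, if any
                    continue
                vectors[word] = np.asarray(values, dtype=np.float32)
        return vectors

    glove = load_glove("embeddings/glove.6B.300d.txt", dim=300)
    print(len(glove), glove["the"][:5])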

Training

To train a model, either the LM or SEQ3, run the corresponding Python script and pass a yaml model config as an argument. The yaml config specifies everything about the experiment to be executed; therefore, to change a model, edit its yaml config or create a new one. The model config files are under the model_configs/ directory. Use the provided configs as a reference; each parameter is documented in comments, although most of them are self-explanatory.
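Conceptually, each training script just reads the yaml file into a nested dict and pulls its settings from it. A stripped-down version of that pattern (not the actual entry point; the printed key is only an example, and PyYAML is assumed to be installed):

    import argparse
    import yaml

    parser = argparse.ArgumentParser()
    parser.add_argument("--config", required=True, help="path to a yaml model config")
    args = parser.parse_args()

    with open(args.config) as f:
        config = yaml.safe_load(f)   # nested dict of experiment settings

    # Example access; see the configs under model_configs/ for the real keys.
    print(config.get("data", {}))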

Train the Language Model prior

In our experiments we trained the LM only on the source sentences of the Gigaword dataset.

python models/sent_lm.py --config model_configs/camera/lm_prior.yaml 

After the training ends, the checkpoint with the best validation loss will be saved under the directory checkpoints/.
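If you want to inspect a saved checkpoint afterwards, something like the following works (a sketch; the exact contents of the checkpoint depend on the project's trainer, and the file name is just an example):

    import torch

    # Load on the CPU so no GPU is required for inspection.
    # (If the checkpoint pickles model objects, run this from the project root.)
    ckpt = torch.load("checkpoints/lm_prior.pt", map_location="cpu")

    # Print the top-level structure before assuming any particular layout.
    print(list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))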

Train SEQ3

Training the LM prior first is a prerequisite for using SEQ3's full objective. You can still train SEQ3 without it, but the LM prior loss will then be disabled.

python models/seq3.py --config model_configs/camera/seq3.full.yaml 

Prototyping: You can experiment with SEQ3 without downloading the full training data by training the LM and SEQ3 with the configs model_configs/lm.yaml and model_configs/seq3.yaml, respectively, which use the small subset of the training data.

Troubleshooting

  • If you get the error ModuleNotFoundError: No module named 'X', add the project root to your PYTHONPATH in your ~/.bashrc, or simply run from the project root:

    export PYTHONPATH='.'
    

seq3's People

Contributors

cbaziotis

seq3's Issues

Failed to run seq3 trainer

The file utils/eval.py contains several errors:

  1. rouge.Rouge does not accept init parameters such as 'max_n' or 'limit_length'; even very old versions of the package do not have them.
  2. the rouge metrics have no 'rouge-n', only 'rouge-1' and 'rouge-2'.

$ python models/seq3.py --config model_configs/seq3.yaml

Traceback (most recent call last):
  File "models/seq3.py", line 423, in <module>
    train_loss = trainer.train_epoch()
  File "/home/jinzy/work/compression/seq3/modules/training/trainer.py", line 118, in train_epoch
    c(batch, losses, loss_list, batch_outputs)
  File "models/seq3.py", line 368, in eval_callback
    scores = rouge_file_list(config["data"]["ref_path"], hyps)
  File "./utils/eval.py", line 33, in rouge_file_list
    scores = rouge_lists(refs, hyps_list)
  File "./utils/eval.py", line 17, in rouge_lists
    stemming=True)
TypeError: __init__() got an unexpected keyword argument 'max_n'

rouge init max_n=2 deprecated

The Rouge constructor rejects these arguments:

  • max_n=2
  • limit_length

Total Params: 4.6M
Total Trainable Params: 4.6M
Epoch:1, Batch:100/15800 (1%) [--------------------] Time: 0m 10s (-28m 40s)
Traceback (most recent call last):
  File "models/seq3.py", line 423, in <module>
    train_loss = trainer.train_epoch()
  File "/users/jwilliams/code/seq3/modules/training/trainer.py", line 118, in train_epoch
    c(batch, losses, loss_list, batch_outputs)
  File "models/seq3.py", line 368, in eval_callback
    scores = rouge_file_list(config["data"]["ref_path"], hyps)
  File "./utils/eval.py", line 33, in rouge_file_list
    scores = rouge_lists(refs, hyps_list)
  File "./utils/eval.py", line 17, in rouge_lists
    stemming=True)
TypeError: __init__() got an unexpected keyword argument 'limit_length'
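Both tracebacks point at the same cause: utils/eval.py constructs rouge.Rouge with keyword arguments such as max_n and limit_length, which exist in the py-rouge package but not in the similarly named rouge package (both are imported as rouge). A likely fix, assuming py-rouge is the intended dependency, is to install it (pip install py-rouge) and use a constructor call along these lines:

    import rouge   # the py-rouge package, which accepts these arguments

    evaluator = rouge.Rouge(metrics=["rouge-n", "rouge-l"],
                            max_n=2,               # report ROUGE-1 and ROUGE-2
                            limit_length=True,
                            length_limit=100,
                            length_limit_type="words",
                            apply_avg=True,
                            alpha=0.5,
                            stemming=True)
    scores = evaluator.get_scores(["a generated summary"], ["a reference summary"])
    print(scores)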

Model size, convergence times, etc.

Hello,

I am trying to reproduce your results, but I can't seem to fit the model in memory. Could you please tell me your hardware specs, as well as whether you ran into out-of-memory issues?

Also, could you please motivate your choice of recurrent encoder-decoder models over the more recent Transformer architecture?

Great work, looking forward to hearing from you.

LM prior checkpoint naming

Hi, thanks for the code.
Just want to confirm: the seq3 training looks for lm_giga_articles_all_10K.pt, while the LM prior training saves its checkpoint as lm_prior.pt.
I guess I can simply rename it to make it work?
Thanks!
