
This is the code for "How to Make a Text Summarizer - Intro to Deep Learning #10" by Siraj Raval on Youtube

License: MIT License

Jupyter Notebook 100.00%

how_to_make_a_text_summarizer's Introduction

How_to_make_a_text_summarizer

This is the code for "How to Make a Text Summarizer - Intro to Deep Learning #10" by Siraj Raval on Youtube.

Coding Challenge - Due Date - Thursday, March 23rd at 12 PM PST

The challenge for this video is to make a text summarizer for a set of articles with Keras. You can use any textual dataset to do this. By doing this you'll learn more about encoder-decoder architecture and the role of attention in deep learning. Good luck!

Overview

This is the code for this video on Youtube by Siraj Raval as part of the Deep Learning Nanodegree with Udacity. We're using an encoder-decoder architecture to generate a headline from a news article.

Dependencies

  • Tensorflow or Theano
  • Keras
  • python-Levenshtein (pip install python-levenshtein)

Use pip to install any missing dependencies.

Basic Usage

Data

The example in the video is built from the text at the start of the article, which I call the description (or desc), and the text of the original headline (or head). The texts should already be tokenized, with tokens separated by spaces. This is a good example dataset: you can use its 'content' as the 'desc' and its 'title' as the 'head'.

Once you have the data ready, save it in a Python pickle file as a tuple (heads, descs, keywords), where heads is a list of all the headline strings and descs is a list of all the article strings, in the same order and of the same length as heads. The keywords information is ignored, so you can put None there.
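For example, a minimal sketch of writing that pickle (the example strings and the FN dataset name are hypothetical; the data/%s.pkl pattern is what the notebooks expect):

import pickle

# heads: headline strings; descs: article strings, same order and length as heads
heads = ['stocks rally on strong earnings', 'new frog species discovered']   # hypothetical
descs = ['shares rose sharply after ...', 'researchers in the amazon ...']   # hypothetical
keywords = None   # the keywords entry is ignored

FN = 'tokens'   # hypothetical dataset name -> data/tokens.pkl
with open('data/%s.pkl' % FN, 'wb') as fp:
    pickle.dump((heads, descs, keywords), fp, 2)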

Here is a link on how to get similar datasets.

Build a vocabulary of words

The vocabulary-embedding notebook describes how a dictionary is built for the tokens and how an initial embedding matrix is built from GloVe.
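Roughly, the idea looks like the following sketch (not the notebook's exact code; the vocabulary size, embedding dimension, and pickle name are assumptions):

import pickle
import numpy as np
from collections import Counter

vocab_size, embedding_dim = 40000, 100   # assumed settings

# load the (heads, descs, keywords) pickle described in the Data section
with open('data/tokens.pkl', 'rb') as fp:   # hypothetical file name
    heads, descs, _ = pickle.load(fp)

# 1. build a token -> index dictionary from the most frequent tokens
counts = Counter(w for text in heads + descs for w in text.split())
idx2word = [w for w, _ in counts.most_common(vocab_size)]
word2idx = {w: i for i, w in enumerate(idx2word)}

# 2. read the GloVe vectors into a dict
glove = {}
with open('glove.6B.%dd.txt' % embedding_dim) as f:
    for line in f:
        parts = line.rstrip().split(' ')
        glove[parts[0]] = np.asarray(parts[1:], dtype='float32')

# 3. start from small random values, then copy in the GloVe vector where one exists
embedding = np.random.uniform(-0.05, 0.05, (vocab_size, embedding_dim)).astype('float32')
for w, i in word2idx.items():
    if w in glove:
        embedding[i] = glove[w]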

Train a model

The train notebook describes how a model is trained on the data using Keras.
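The notebook's model is more elaborate than this (stacked recurrent layers plus an attention/context layer), but as a rough sketch of the Keras plumbing, with assumed sizes and not the notebook's actual architecture, a sequence model trained to predict the next token over description + headline could look like:

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, TimeDistributed, Dense

vocab_size, embedding_dim, maxlen, rnn_size = 40000, 100, 50, 512   # assumptions
embedding = np.random.uniform(-0.05, 0.05, (vocab_size, embedding_dim))  # or the GloVe matrix built above

model = Sequential()
model.add(Embedding(vocab_size, embedding_dim, input_length=maxlen,
                    weights=[embedding], mask_zero=True))
model.add(LSTM(rnn_size, return_sequences=True))
model.add(TimeDistributed(Dense(vocab_size, activation='softmax')))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')

# X: (n_samples, maxlen) padded token ids for description + headline,
# Y: the same sequences shifted left by one position
# model.fit(X, np.expand_dims(Y, -1), batch_size=64, epochs=10)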

Use model to generate new headlines

The predict notebook generates headlines with the trained model and shows the attention weights used to pick words from the description. The text generation includes a feature that was not described in the original paper: it allows words outside the training vocabulary to be copied from the description into the generated headline.
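The copy mechanism can be pictured roughly as follows (a sketch, not the notebook's exact code): whenever the model emits its out-of-vocabulary token, take the description word with the largest attention weight at that step.

import numpy as np

def copy_oov_words(predicted_tokens, desc_tokens, attention, oov_token='<unk>'):
    # attention has shape (len(predicted_tokens), len(desc_tokens))
    out = []
    for t, word in enumerate(predicted_tokens):
        if word == oov_token:
            # copy the source word the decoder attended to most at this step
            word = desc_tokens[int(np.argmax(attention[t]))]
        out.append(word)
    return out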

Examples of headlines generated

Good (cherry-picked) examples of generated headlines are shown in the images in the repository.

Examples of attention weights

(see the attention-weights visualization image in the repository)

Credits

The credit for this code goes to udibr; I've merely created a wrapper to make it easier to get started.


how_to_make_a_text_summarizer's Issues

Regarding Word embedding -- read GloVe

When I execute this:

glove_n_symbols = !wc -l {glove_name}
glove_n_symbols = int(glove_n_symbols[0].split()[0])
glove_n_symbols

I get the following error:

ValueError Traceback (most recent call last)
in ()
1 glove_n_symbols = get_ipython().getoutput(u'wc -l {glove_name}')
----> 2 glove_n_symbols = int(glove_n_symbols[0].split()[0])
3 glove_n_symbols

ValueError: invalid literal for int() with base 10: "'wc'"

This makes sense to me: how could that string be converted to an integer? The output is something like ["'wc' is not recognized as an internal or external command,", 'operable program or batch file.'], so the first token is 'wc', which cannot be parsed as an integer.
Please reply as soon as possible.

Thank you!!!
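This error comes from running the notebook on Windows (or anywhere without the Unix wc utility), so the shell output is an error message rather than a line count. A portable alternative is to count the lines in Python instead, e.g.:

# count the GloVe lines without relying on the external `wc` command
with open(glove_name) as f:
    glove_n_symbols = sum(1 for _ in f)
print(glove_n_symbols)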

Memory error while working with The Signal Media One-Million News Articles Dataset(2.7GB approx)

I tried creating the vocabulary embeddings with the 'The Signal Media One-Million News Articles Dataset' (which is approximately 2.7GB in size) but it gave me an error on a g2.8xlarge instance. Not sure what I am doing wrong here.
vocabulary-embedding.py runs as expected, but training the model gives a memory error.
I also tried distributing the model on the 4 GPUs that are available.

Is there any workaround, code snippet, or alternate dataset that could help me solve this problem?

'NoneType' object has no attribute '__getitem__'

@llSourcell @rtlee9 The following line raises 'NoneType' object has no attribute '__getitem__':

activation_energies = activation_energies + -1e20*K.expand_dims(1.-K.cast(mask[:, :maxlend],'float32'),1)

This is happening when running:
model.add(SimpleContext(name='simplecontext_1'))

Using:

keras.__version__: 2.0.4
tf.__version__: 1.3.0

Any idea why this is happening?
I also think it would be a good idea to add a Dockerfile to your projects. Thanks.

No weights found: While running train.py

IOError: Unable to open file (Unable to open file: name = 'data/train.hdf5', errno = 2, error message = 'no such file or directory', flags = 0, o_flags = 0)

While loading the weights, they are not found and I get the above error message.

can not find the dataset

Could you share your dataset? I cannot find it on the website you provided. Please!

Where is `get_file`?

In the file vocabulary-embedding.ipynb (https://github.com/llSourcell/How_to_make_a_text_summarizer/blob/master/vocabulary-embedding.ipynb),

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-21-ff6b5a000c0e> in <module>()
      9 if not os.path.exists(glove_name):
     10     path = 'glove.6B.zip'
---> 11     path = get_file(path, origin="http://nlp.stanford.edu/data/glove.6B.zip")
     12     get_ipython().system(u'unzip {datadir}/{path}')

NameError: name 'get_file' is not defined
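get_file is a Keras download helper that has to be imported before that cell runs; the import path differs slightly between Keras versions, so something like the following should work:

try:
    from keras.utils.data_utils import get_file   # Keras 1.x / 2.x
except ImportError:
    from keras.utils import get_file              # newer Keras releases

path = get_file('glove.6B.zip', origin='http://nlp.stanford.edu/data/glove.6B.zip')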

Loss minimum

What about the loss? What is the minimum you could achieve? How many iterations did you need to reach the optimal loss (not all 500 iterations, for example)?

Could not understand the "Read Glove" Cell

I have already downloaded the dataset from that site; how can I use it? The code given downloads the dataset itself.
Has anybody done this on Python 3.x? I think many of the issues I faced are due to the difference in Python versions.

'unzip' is not recognized as an internal or external command, operable program or batch file.

Can someone help me with this issue?

When I try to run this block:

fname = 'glove.6B.%dd.txt' % embedding_dim
import os
datadir_base = os.path.expanduser(os.path.join('~', '.keras'))
if not os.access(datadir_base, os.W_OK):
    datadir_base = os.path.join('/tmp', '.keras')
datadir = os.path.join(datadir_base, 'datasets')
glove_name = os.path.join(datadir, fname)
if not os.path.exists(glove_name):
    path = 'glove.6B.zip'
    path = get_file(path, origin="http://nlp.stanford.edu/data/glove.6B.zip")
    !unzip {datadir}/{path}

I get this error:

'unzip' is not recognized as an internal or external command,
operable program or batch file.
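The unzip command line tool is usually not available on Windows; the archive can be extracted from Python instead with the standard zipfile module (a sketch, reusing path and datadir from the cell above):

import zipfile

# extract the GloVe archive downloaded by get_file without the external `unzip` tool
with zipfile.ZipFile(path) as zf:
    zf.extractall(datadir)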

could not find the post processing module

The video tutorial on YouTube mentions a post-processing module used to build the GloVe matrix. I could not find any such module or the specific function. I would be thankful if someone could help me out with this.

Query regarding GloVe Matrix initialization

Why are we multiplying the standard deviation of the GloVe vectors by np.sqrt(12)/2 and using that as the scale for generating random numbers?
And why are we multiplying the GloVe matrix by 0.1?
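For what it's worth, sqrt(12)/2 equals sqrt(3), which is the factor that converts a standard deviation into the half-width of a uniform distribution with that standard deviation, so the randomly initialized rows end up with roughly the same spread as the GloVe vectors. A quick numpy check:

import numpy as np

# Uniform(-a, a) has standard deviation a / sqrt(3), so choosing
# a = sigma * sqrt(12) / 2 reproduces a target standard deviation sigma
sigma = 0.4                                    # hypothetical std of the GloVe vectors
a = sigma * np.sqrt(12) / 2
samples = np.random.uniform(-a, a, size=1000000)
print(samples.std())                           # ~0.4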

FileNotFoundError data/%s.pkl

Can someone help me find this file (data/%s.pkl)? Thanks.

# step 1 - load data
with open('data/%s.pkl', 'rb') as fp:
    heads, desc, keywords = pickle.load(fp)


FileNotFoundError Traceback (most recent call last)
in ()
1 #step 1 - load data
----> 2 with open('data/%s.pkl', 'rb') as fp:
3 heads, desc, keywords = pickle.load(fp)

FileNotFoundError: [Errno 2] No such file or directory: 'data/%s.pkl'
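That pickle is the dataset file you create yourself, as described in the Data section above. Note also that the literal string 'data/%s.pkl' is never formatted in the snippet, so even with the file in place the dataset name has to be substituted, e.g. (FN is a hypothetical name):

import pickle

FN = 'tokens'   # hypothetical; use whatever you named your pickle under data/
with open('data/%s.pkl' % FN, 'rb') as fp:
    heads, desc, keywords = pickle.load(fp)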

Provide Trained weights

Hi @llSourcell, I am trying to train the model but it is taking a lot of time, so could you provide the trained weights (if you or anyone else have successfully trained and saved them)?

Providing the trained weights would be great for someone who is just trying to play around with the model and test it out with different datasets.

Cheers

Can't get past training

Siraj,
I forked this repo and added a parser for the Signal dataset, along with a frozen list of working packages. Can you let me know if this model trains successfully for you? Has anyone been able to get this working? I'm not sure what a 'typical' training time is on the Signal dataset, or whether there are any subtle bugs in the code/parser I wrote. Any review appreciated.

DLND Student
LRParser
