keras-indrnn's Introduction

IndRNN in Keras

Keras implementation of the IndRNN model from the paper Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN

Usage

Usage of IndRNNCells

from keras.layers import Input
from ind_rnn import IndRNNCell, RNN

cells = [IndRNNCell(128), IndRNNCell(128)]
ip = Input(...)  # e.g. shape=(timesteps, features)
x = RNN(cells)(ip)
...

Usage of IndRNN layer

from keras.layers import Input
from ind_rnn import IndRNN

ip = Input(...)
x = IndRNN(128)(ip)

Notes

IndRNN and its associated cell have two additional parameters, recurrent_clip_min and recurrent_clip_max, which default to -1. A value of -1 means they take their default clipping range of [0, 2 ^ (1 / timesteps)], which is appropriate for the ReLU activation. If you change the activation function, do not forget to change the clipping range as well, or the model may diverge during training.

In Keras, the number of timesteps is detected implicitly when the input shape is fully specified. Since this clipping matters both during training (for the initialization of the weights) and during inference (for clipping the recurrent weights), it is advisable to always specify the number of timesteps, even during inference.

Since the number of timesteps cannot be determined for variable-length problems, the model then defaults to a max clipping range of 1.0, which is equivalent to an infinite-timestep problem. This may cause issues if the model was trained with a preset number of timesteps.
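As a concrete illustration of the default range above (plain Python, using only the formula from the paper), the ReLU clip bounds for a given sequence length can be computed as:

```python
# Recurrent-weight clip range for the ReLU activation, as described above:
# the recurrent weights are kept in [0, 2 ** (1 / T)] for T timesteps.
def relu_recurrent_clip(timesteps):
    """Return (min, max) clipping bounds for the recurrent weights."""
    return 0.0, pow(2.0, 1.0 / timesteps)

lo, hi = relu_recurrent_clip(100)  # hi is only slightly above 1.0
```

For T = 1 the upper bound is 2.0, and it decays toward 1.0 as T grows, which is why the variable-length fallback of 1.0 corresponds to an infinite-timestep problem.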

Libraries

  • Keras 2.1.5+
  • Tensorflow / Theano / CNTK (not tested)

Todo

  • Implement IndRNNCell and IndRNN Layer
  • Implement IMDB trial
  • Implement Addition problem trial
  • Implement Sequential MNIST trial
  • See if MNIST converges to the paper results
  • Implement Recurrent BatchNorm
  • Implement Multilayer IndRNN using Residual connections

keras-indrnn's People

Contributors

astupidbear, ctudoudou, titu1994


keras-indrnn's Issues

Very low accuracy with IndRNN

Hello, I want to train an IndRNN on my data, but I am getting very low accuracy; please let me know what I am doing wrong.
My training X (input) data has shape (10000, 1000), and my training Y has shape (10000, 1); it is a decimal value that I want to predict.
How should my model look if I am using IndRNN? Any kind of help would be appreciated.
The layers I have added so far are given below.

How to do "Stateful" prediction

Hi Somshubra,

I was wondering if you could give me some advice. I'm trying to implement auto-regressive prediction. I've trained the model to predict the next value in a sequence by setting the target to be the sample shifted by one. With a vanilla LSTM, you can set stateful=True and then recursively feed the last prediction as the new input to generate the desired sequence. This works because the LSTM keeps its last state after predicting (until you manually reset it).

How to do this with IndRNN? Would I have to save the final state at every step and then feed it as a parameter of the next prediction step? Thank you for your suggestions!

Best,
Jan
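One way to sketch this loop without Keras' stateful mode (a NumPy stand-in with illustrative, untrained weights — not this repository's API): keep the hidden state in a variable, and feed each prediction back as the next input. With the IndRNN layer itself, the analogous approach is to return the final state (return_state=True) and pass it back via initial_state on the next call.

```python
import numpy as np

# Auto-regressive generation with an IndRNN-style step, carrying the hidden
# state manually between calls. Weights are random stand-ins, not trained.
rng = np.random.RandomState(0)
units = 4
W = rng.randn(1, units) * 0.1     # input kernel (1 feature)
u = rng.uniform(0.0, 1.0, units)  # per-unit (element-wise) recurrent weight
b = np.zeros(units)
V = rng.randn(units, 1) * 0.1     # readout producing the next input value

def ind_rnn_step(x, h):
    # IndRNN recurrence: h_t = relu(W x_t + u * h_{t-1} + b)
    return np.maximum(0.0, x @ W + u * h + b)

h = np.zeros(units)               # state persists across the generation loop
x = np.array([0.5])               # seed value
outputs = []
for _ in range(10):
    h = ind_rnn_step(x, h)        # state carried forward, never reset
    x = h @ V                     # feed the prediction back as the next input
    outputs.append(float(x[0]))
```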

Error when calling IndRNN with an initial state.

Hi, I am getting an error

Traceback (most recent call last):
  File "lstm_seq2seq.py", line 145, in <module>
    initial_state=encoder_state)
  File "/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/layers/recurrent.py", line 570, in __call__
    output = super(RNN, self).__call__(full_input, **kwargs)
  File "/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/engine/base_layer.py", line 431, in __call__
    self.build(unpack_singleton(input_shapes))
  File "/home/ec2-user/test/keras/examples/ind_rnn.py", line 421, in build
    super(IndRNN, self).build(input_shape)
  File "/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/layers/recurrent.py", line 493, in build
    self.cell.build(step_input_shape)
  File "/home/ec2-user/test/keras/examples/ind_rnn.py", line 141, in build
    self.recurrent_clip_max = pow(2.0, 1. / self.timesteps)

when trying to use the IndRNN instead of an LSTM for the example seq2seq implementation in the Keras source.

Here is the code that produces the error:

# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = IndRNN(latent_dim, return_state=True)
encoder_outputs, encoder_state = encoder(encoder_inputs)
# We discard 'encoder_outputs' and only keep the states.
# Set up the decoder, using 'encoder_state' as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = IndRNN(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_state)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

Got any ideas why this happens?

Great job on the implementation btw!
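For reference, the traceback above ends at pow(2.0, 1. / self.timesteps), which raises when the input shape is (None, ...) so that self.timesteps is None. A hypothetical guard (a sketch of the fallback described in the README's notes, not the repository's actual code) looks like:

```python
# If the number of timesteps is unknown (variable-length input), fall back
# to a max clipping range of 1.0, the infinite-timestep default.
def recurrent_clip_max(timesteps):
    if timesteps is None:
        return 1.0
    return pow(2.0, 1.0 / timesteps)
```

Giving the encoder and decoder inputs a concrete sequence length (e.g. shape=(max_len, num_encoder_tokens) instead of (None, ...)) avoids the error without any code changes.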

Negative input error

Dear Majumdar,
I used your code and it works well. But when I standardized my data between -1 and 1, it gives this error:

InvalidArgumentError: indices[40,496] = -1 is not in [0, 20000)
[[{{node embedding_3/embedding_lookup}}]]

I searched the web and tried different options, but I could not fix it. Do you have any suggestions for correcting it?
Regards
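Keras' Embedding layer performs an integer table lookup, so its inputs must be integer indices in [0, input_dim); standardized real values such as -1.0 are not valid indices. For continuous features, drop the Embedding layer and feed the values to the recurrent layer directly; for integer tokens, shift them to be non-negative first. A small sketch with hypothetical data:

```python
import numpy as np

# Integer tokens containing a negative value, as in the error above.
tokens = np.array([-1, 0, 3, 7])

# Shift so the smallest token maps to index 0; the Embedding layer's
# input_dim must then be at least shifted.max() + 1.
shifted = tokens - tokens.min()
```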

ImportError: cannot import name '_generate_dropout_ones'

I'm using Keras 2.1.5, but I am getting this error when I try to run IndRNN:

ImportError                               Traceback (most recent call last)
<ipython-input> in <module>()
     11 from keras.legacy import interfaces
     12 from keras.layers import RNN
---> 13 from keras.layers.recurrent import _generate_dropout_mask, _generate_dropout_ones
     14
     15 class IndRNNCell(Layer):

ImportError: cannot import name '_generate_dropout_ones'
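_generate_dropout_ones was removed from keras.layers.recurrent in later Keras releases; its only job is to produce a (batch, dims) tensor of ones used as the base of a dropout mask. A NumPy stand-in shows the shape contract (with Keras you would build it from the backend's ones/shape ops instead — this is a sketch, not the original implementation):

```python
import numpy as np

def _generate_dropout_ones(inputs, dims):
    # A (batch_size, dims) tensor of ones, matching the input's dtype.
    return np.ones((inputs.shape[0], dims), dtype=inputs.dtype)

mask_base = _generate_dropout_ones(np.zeros((8, 5), dtype='float32'), 3)
```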

Error when applying Bidirectional wrapper

tried: Bidirectional(IndRNN(5, return_sequences=True), mode='sum')

error occurs:

Traceback (most recent call last):
......

  File "~/anaconda3/lib/python3.6/site-packages/keras/layers/wrappers.py", line 264, in __init__
    self.backward_layer = layer.__class__.from_config(config)

  File "ind_rnn.py", line 528, in from_config
    return cls(**config)

  File "~/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)

  File "ind_rnn.py", line 408, in __init__
    **kwargs)

  File "~/anaconda3/lib/python3.6/site-packages/keras/layers/recurrent.py", line 380, in __init__
    super(RNN, self).__init__(**kwargs)

  File "~/anaconda3/lib/python3.6/site-packages/keras/engine/topology.py", line 293, in __init__
    raise TypeError('Keyword argument not understood:', kwarg)

TypeError: ('Keyword argument not understood:', 'depth')

I couldn't fix it myself; any updates would be appreciated!
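Until the wrapper issue is fixed, the 'sum' merge can be built by hand: run the sequence through one forward layer and one reversed layer and add the outputs (in Keras, two IndRNN layers, one with go_backwards=True, merged with Add — note a real Bidirectional wrapper uses separate weights per direction). A NumPy stand-in of the recurrence illustrates the reversal bookkeeping; weights here are illustrative, not trained:

```python
import numpy as np

def run_indrnn(seq, W, u, b):
    # Unrolled IndRNN: h_t = relu(x_t @ W + u * h_{t-1} + b)
    h = np.zeros_like(u)
    out = []
    for x in seq:
        h = np.maximum(0.0, x @ W + u * h + b)
        out.append(h)
    return np.stack(out)

rng = np.random.RandomState(1)
seq = rng.randn(6, 2)                        # 6 timesteps, 2 features
W = rng.randn(2, 4) * 0.1
u = rng.uniform(0.0, 1.0, 4)
b = np.zeros(4)

fwd = run_indrnn(seq, W, u, b)               # forward pass
bwd = run_indrnn(seq[::-1], W, u, b)[::-1]   # reverse input, re-align output
summed = fwd + bwd                           # the 'sum' merge mode
```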
