
disan's People

Contributors

taoshen58

disan's Issues

Using the tree structure rather than raw text in the SNLI dataset

Hi Tao,
I am grateful that you shared your code. I retrained the model on the SNLI dataset and the result is good. But I have a question: the SNLI dataset contains both raw text and a tree structure. Building the word vocabulary and the dataset from the raw text would be easy, but you build them from the tree structure, which is more complex. Why did you choose the latter? Perhaps you tried both ways and the tree-based one performs better.
Thanks very much!
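For context, these are the two sources I mean. The field names below are the ones in the SNLI jsonl release, and the parsing is only a rough sketch, not the repository's code:

```python
import json

def tokens_from_snli_line(line):
    ex = json.loads(line)
    # option 1: raw text, tokenized naively on whitespace
    raw_tokens = ex['sentence1'].split()
    # option 2: recover tokens from the binarized parse tree by dropping the brackets
    tree_tokens = [t for t in ex['sentence1_binary_parse'].split() if t not in ('(', ')')]
    return raw_tokens, tree_tokens
```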

What does the variable rep_mask do in disan.py?

Hi, thanks for your contributions.
I'm confused about the rep_mask variable in the DiSA block. The positional mask is applied with an element-wise add in Figure 2, but in the code of directional_attention_with_dense() there are the lines rep_mask_tile = ... and attn_mask = .... What do these two lines do in this function?
Another confusion: what do the functions mask_for_high_rank() and exp_mask_for_high_rank() do?
Thanks for your attention.
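For what it's worth, here is a minimal sketch of how such masks are commonly built. The names mirror the ones in the question, but the code below is illustrative and is not taken from the repository:

```python
import tensorflow as tf

VERY_NEGATIVE_NUMBER = -1e30

def build_attn_mask(rep_mask, direction='forward'):
    """rep_mask: bool [batch_size, seq_len], True for real tokens, False for padding."""
    sl = tf.shape(rep_mask)[1]
    # pairwise validity: attention from token i to token j is allowed only if both are real
    rep_mask_tile = tf.logical_and(tf.expand_dims(rep_mask, 1),   # [bs, 1, sl]
                                   tf.expand_dims(rep_mask, 2))   # [bs, sl, 1] -> broadcasts to [bs, sl, sl]
    # directional (positional) mask: 'forward' lets token i attend only to earlier tokens j < i
    idx = tf.range(sl)
    if direction == 'forward':
        direct_mask = tf.greater(tf.expand_dims(idx, 1), tf.expand_dims(idx, 0))  # [sl, sl]
    else:
        direct_mask = tf.greater(tf.expand_dims(idx, 0), tf.expand_dims(idx, 1))
    return tf.logical_and(rep_mask_tile, tf.expand_dims(direct_mask, 0))          # attn_mask, [bs, sl, sl]

def exp_mask_for_high_rank(logits, mask):
    # add a huge negative number to masked positions so softmax gives them ~0 probability
    mask = tf.expand_dims(mask, -1)  # broadcast over the feature/channel axis
    return logits + (1.0 - tf.cast(mask, tf.float32)) * VERY_NEGATIVE_NUMBER

def mask_for_high_rank(val, rep_mask):
    # zero out the representations of padding tokens
    return val * tf.cast(tf.expand_dims(rep_mask, -1), tf.float32)
```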

SST data processing

Hi,
Thanks for your great work and your nice code.
I just can't figure out why you use STree.txt and SOStr.txt when processing the SST dataset.
Why not just use dictionary.txt and sentiment_labels.txt?
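For reference, this is the alternative I had in mind. The file layout is my understanding of the SST release (fields separated by '|'), so please correct me if the sketch below misreads any of the files:

```python
# Rough sketch, not the repository's code; paths and file formats are assumptions.
def load_sentence_sentiments(sst_dir):
    # dictionary.txt: "<phrase>|<phrase id>" per line
    phrase2id = {}
    with open(f"{sst_dir}/dictionary.txt", encoding="utf-8") as f:
        for line in f:
            phrase, pid = line.rstrip("\n").rsplit("|", 1)
            phrase2id[phrase] = int(pid)

    # sentiment_labels.txt: header line, then "<phrase id>|<sentiment value in [0, 1]>"
    id2sent = {}
    with open(f"{sst_dir}/sentiment_labels.txt", encoding="utf-8") as f:
        next(f)  # skip header
        for line in f:
            pid, score = line.rstrip("\n").split("|")
            id2sent[int(pid)] = float(score)

    # SOStr.txt: tokens of each sentence separated by "|"
    data = []
    with open(f"{sst_dir}/SOStr.txt", encoding="utf-8") as f:
        for line in f:
            tokens = line.rstrip("\n").split("|")
            sentence = " ".join(tokens)
            data.append((tokens, id2sent.get(phrase2id.get(sentence))))
    return data
```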

The net seems to use an LSTM

Hi, I am very glad you shared your code, and you are very kind. The paper says DiSAN can achieve state-of-the-art inference quality without any RNN or CNN, but in the code, 'contextual_bi_rnn' is an RNN. So is the best performance obtained with the help of an LSTM?
Thanks very much.

Can you give usage examples for fast-disa.py?

Hi, thanks for your contributions.
I want to use disan.py for some NLP tasks, but I found it needs 24 or more trainable variables, which makes the machine compute slowly. I don't know how to deal with this.
In fast-disa.py the rep_tensor dimension is written as [batc_size, seq_len, channels], while in disan.py it is [batch_size, seq_len, channels]. Are these the same dimension (with just a typo), or do they mean two different things? Also, could you give runnable examples of fast-disa.py?
Thanks for your help.

About the structure of the code

Hi, when I try to run the test code, I get import errors in 'SNLI_disan/src/model/model_disan' because of wrong paths, so I have to change the import statements in order to run the code successfully. If the author could restructure the code or fix the imports, it would be easier to run.
Thanks very much!
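A workaround that avoids editing each import (assuming the imports are written relative to the project's src directory; the path depth and the imported name below are illustrative, not copied from the repo) is to put that directory on the Python path before running:

```python
import os
import sys

# add the project's src directory to the import path; adjust the number of '..' to your layout
project_src = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))
if project_src not in sys.path:
    sys.path.insert(0, project_src)

# hypothetical import that previously failed because of the wrong path
from model.model_disan import ModelDiSAN  # noqa: E402
```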

word embedding

Hello Tao,
Thank you for sharing your nice code. I need to test other word embeddings with your model; the new embeddings have a different dimension, not 300. From what I gather from the paper, we would need to change the configuration options --word_embedding_length and --hidden_units_num to the new dimension. Is that correct?
Thank you in advance.
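To make the question concrete, here is how I picture the constraint (a sketch with illustrative names, not the repository's code): the embedding matrix's second dimension has to match --word_embedding_length, whereas --hidden_units_num is a separate width hyperparameter that may or may not need to follow it:

```python
import numpy as np
import tensorflow as tf

word_embedding_length = 200   # must equal the dimensionality of the new vectors
hidden_units_num = 300        # model width; a separate hyperparameter

# hypothetical file holding the new vectors, shape [vocab_size, 200]
pretrained = np.load('my_vectors.npy').astype(np.float32)
assert pretrained.shape[1] == word_embedding_length

emb_mat = tf.get_variable('emb_mat', initializer=pretrained, trainable=False)
token_ids = tf.placeholder(tf.int32, [None, None])          # [batch_size, seq_len]
rep_tensor = tf.nn.embedding_lookup(emb_mat, token_ids)     # [batch_size, seq_len, word_embedding_length]
# inside the encoder, a first projection typically maps word_embedding_length -> hidden_units_num
```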

input to fast-disan.py

rep_tensor: tf.float32-[batch_size,seq_len,channels], input sequence tensor;

What is channels here? Is it the dimension of each token embedding,
i.e. [25, 25, 300]?

And what should I expect as the output?
[25, 300]?
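To spell out my reading of the docstring (a sketch; the output shape is my guess, not something I have confirmed in the code):

```python
import tensorflow as tf

batch_size, seq_len, channels = 25, 25, 300   # channels = embedding dimension of each token

rep_tensor = tf.placeholder(tf.float32, [batch_size, seq_len, channels])  # one vector per token
rep_mask = tf.placeholder(tf.bool, [batch_size, seq_len])                 # True for real tokens, False for padding

# A sentence encoder of this kind collapses the seq_len axis, so the output should be
# one vector per sentence, i.e. [batch_size, output_dim]; output_dim depends on the
# implementation and need not equal 300.
```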

question about config parameter "only_sentence"

Hi @taoshen58,

Thank you for sharing the code.

I am trying to run DiSAN on the SST dataset as in your paper. I am confused about the parameter "only_sentence". As far as I understand, if it is False (the default in your code), the sentence phrases are included for training but not for evaluation.

If I just want to reproduce the performance reported in your paper, is it correct to simply set only_sentence to True?

TensorBoard graph does not show anything

I want to display the DiSAN graph with TensorBoard. The scalars display fine, but the graph tab does not show anything. I also checked my environment and tested the TensorBoard MNIST example, which works. I am confused and would like to know whether the author has successfully viewed the graph in TensorBoard.
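As a sanity check, this is the minimal TF 1.x pattern I use to confirm the graph is actually written to the log directory that TensorBoard reads (generic code, not specific to this repository):

```python
import tensorflow as tf

a = tf.constant(1.0, name='a')
b = tf.multiply(a, 2.0, name='b')

with tf.Session() as sess:
    # passing graph= here is what populates the "Graphs" tab in TensorBoard
    writer = tf.summary.FileWriter('./summary_logdir', graph=sess.graph)
    sess.run(b)
    writer.close()

# then launch: tensorboard --logdir ./summary_logdir
```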

experiments on MSRP

Hi, I am very glad that you shared your code; it is nice.
I tried to train a model on the MSRP dataset, but the results seem poor: the model performs well on the training set but badly on the test set. I am curious whether you have tried training a model on MSRP, and if so, how the results were.

It seems that without a pretrained embedding input, the results become worse

Thanks for sharing your code.

I have run your code on a click-through-rate prediction task in an item-recommendation scenario. In my setting, each item is treated as a token, and a user's earlier interactions with items are treated as a sentence; I then use DiSAN to encode the user's interaction sequence. If I replace the pre-trained item embedding with a randomly initialized one, the results measured by AUC become sharply worse.

So I wonder: is DiSAN suitable for training a token embedding from scratch while also doing sentence encoding? If not, then a good pre-trained embedding is indeed necessary for DiSAN to reach its excellent performance.

To check the difference before and after tuning the token embeddings, I compared them and found that the change is subtle, especially when I compute the most similar items from the embeddings.
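For reference, this is the kind of swap I made (names and files below are illustrative, not from the repository); the only difference between the two runs is the initializer of the item-embedding matrix, which is trainable in both cases:

```python
import numpy as np
import tensorflow as tf

vocab_size, emb_dim = 50000, 128

# (a) pre-trained item embedding, fine-tuned during training
pretrained = np.load('item_vectors.npy').astype(np.float32)   # hypothetical file, [vocab_size, emb_dim]
emb_pretrained = tf.get_variable('emb_pretrained', initializer=pretrained, trainable=True)

# (b) randomly initialized item embedding, learned from scratch
emb_random = tf.get_variable('emb_random', shape=[vocab_size, emb_dim],
                             initializer=tf.random_uniform_initializer(-0.05, 0.05),
                             trainable=True)
```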

Word embedding tuning

Have you tried word embeddings other than GloVe? I tried to replace the current word embeddings with different ones, but I was surprised that the performance dropped to around 47%. Here is what I did:

  1. I made every token in SST unique (each one gets a row and column according to the sentence it lies in) and generated a new word embedding for each of them.
  2. I made sure that 'token_seq_digital' has the right indices (I am pretty sure it points to the right ones).
  3. The dictionary used by train, test and dev contains all the tokens, since they are now unique.
  4. emb_mat_token now holds, for each new embedding vector, the equivalent vector for the dictionary token.
    The new word_embedding_length is now 900. Do you have a clue why the performance drops in this manner? I expected some improvement, or at least the same results, but not worse.

Perplexity

Hi! Could you tell me the perplexity in your experiments in the end?
