
Word Embedding tune · disan · 6 comments · CLOSED

MariamEssam avatar MariamEssam commented on June 26, 2024
Word Embedding tune


Comments (6)

taoshen58 avatar taoshen58 commented on June 26, 2024

Sorry for the late reply.

I didn't test any other embedding method because it's not the focus of this paper.

Did you try shortening the embedding dim to reduce the number of trainable parameters?


MariamEssam avatar MariamEssam commented on June 26, 2024

Thank you for your answer.

  1. What do you mean by shortening the embedding dim?
  2. Where do you train the embeddings? I can't find it in the code. I know you fine-tune GloVe, but I can't see where that happens.
  3. Finally (you can skip this if you think it's out of scope): should we fine-tune all of the embeddings, or only GloVe?

By the way, I know this drop was happening due to the dimensionality of the embedding, because I tested the same with GloVe but made emb_mat contain all tokens, not only the unique ones (see the sketch below), and yes, performance dropped sharply.

Thank you again, Tao. I would appreciate a quick response to these questions.
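
(For reference, a toy sketch of the "unique tokens only" convention, with made-up names: each distinct token gets exactly one row of emb_mat, and repeated tokens share that row.)

```python
import numpy as np

tokens = ["the", "cat", "sat", "on", "the", "mat"]  # toy corpus
vocab = sorted(set(tokens))                         # unique tokens only
token2id = {tok: i for i, tok in enumerate(vocab)}

emb_dim = 300
emb_mat = np.random.randn(len(vocab), emb_dim).astype(np.float32)  # one row per unique token

ids = [token2id[tok] for tok in tokens]  # repeated "the" maps to the same row
print(ids)  # -> [4, 0, 3, 2, 4, 1]
```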


taoshen58 avatar taoshen58 commented on June 26, 2024
  1. I mean that an overly long embedding dim leads to too many parameters to train, which makes it difficult to reach convergence.
  2. I build a trainable word embedding matrix that is initialized from GloVe, as sketched below.
  3. Sorry, I didn't get you.
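
For item 2, a minimal sketch of that setup (TF 1.x style, as in this repo; the names and sizes are assumptions for illustration, not the repo's exact code):

```python
import numpy as np
import tensorflow as tf

vocab_size, emb_dim = 10000, 300  # assumed sizes for illustration
# Stand-in for the pretrained GloVe matrix loaded from disk.
glove_mat = np.random.randn(vocab_size, emb_dim).astype(np.float32)

# trainable=True (the default) lets backprop update the vectors,
# i.e. the GloVe initialization gets fine-tuned during training.
emb_mat = tf.get_variable("emb_mat",
                          initializer=tf.constant(glove_mat),
                          trainable=True)

token_ids = tf.placeholder(tf.int32, [None, None])  # [batch, seq_len]
token_emb = tf.nn.embedding_lookup(emb_mat, token_ids)
```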


MariamEssam avatar MariamEssam commented on June 26, 2024

I was asking where in the code the word representations are tuned. And in general, if I used another word embedding, would it be possible to skip this tuning?


taoshen58 avatar taoshen58 commented on June 26, 2024

Got you. I have picked up some basics about transferable/pretrained embeddings like ELMo and CoVe over the past few days.

The other embeddings can be fine-tuned in this framework. In the current GloVe setup, I create a trainable TensorFlow variable initialized from the pretrained GloVe vectors; the variable is then automatically fine-tuned by backpropagation.

Therefore, in your setup, just mark the variables in the word embedding layer or the pretrained LSTM as trainable, as in the sketch below.
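
A minimal standalone sketch of that last point (assumed names, TF 1.x): whether a pretrained layer is fine-tuned or frozen comes down to its trainable flag, because optimizers collect tf.trainable_variables() by default.

```python
import numpy as np
import tensorflow as tf

pretrained = np.random.randn(10000, 300).astype(np.float32)  # stand-in for loaded vectors

tuned = tf.get_variable("tuned_emb", initializer=tf.constant(pretrained),
                        trainable=True)    # updated by backprop
frozen = tf.get_variable("frozen_emb", initializer=tf.constant(pretrained),
                         trainable=False)  # stays at its pretrained values

# Optimizers minimize over tf.trainable_variables() unless told otherwise,
# so only `tuned` drifts away from its initialization during training.
print([v.name for v in tf.trainable_variables()])  # -> ['tuned_emb:0']
```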


MariamEssam avatar MariamEssam commented on June 26, 2024

Thank you!

