
Sentiment analysis - val_pred_acc_1 (mann, 6 comments, CLOSED)

wzell commented on July 24, 2024
Sentiment analysis - val_pred_acc_1

from mann.

Comments (6)

wzell commented on July 24, 2024

Hello,

nice to see your interest in our project :-).

In the reverse cross-validation procedure, we have a validation set. So here, it is correct.

What are the loss names? val_loss, val_pred_acc_1 and val_pred_acc_2?


jecaJeca commented on July 24, 2024

Thanks for the answer! For MMD, yes. But is reverse cross-validation also used for the outputs of the other models (NN, CORAL, CMD)? I think the results of those models are based on the validation data, which is zero.

I am using Keras 2 (TensorFlow backend). I changed the code according to artificial_example and added one new layer that takes the outputs of the encoding layers and calculates the CMD loss. So my loss names during training are: loss, pred_loss, cmd_loss (I named the Dense layer used for the CMD calculation 'cmd'), and during validation they are val_loss, val_pred_loss and val_cmd_loss. I suppose that the first losses (loss, val_loss) correspond to the first output (source data), while 'pred_loss' and 'val_pred_loss' are for the target data.
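For reference, the CMD loss discussed above can be sketched in plain NumPy. This is a simplified, hypothetical re-implementation (not the repository's code): it compares the means and higher central moments of two batches of activations assumed to be bounded in [a, b], e.g. sigmoid outputs:

```python
import numpy as np

def cmd(x, y, k=5, a=0.0, b=1.0):
    """Central Moment Discrepancy between two sample sets.

    x, y: arrays of shape (n_samples, n_features) with activations
          assumed bounded in [a, b] (e.g. sigmoid outputs).
    k:    number of central moments to match.
    """
    scale = b - a
    # first-order term: difference of means, scaled by the range
    mx, my = x.mean(axis=0), y.mean(axis=0)
    d = np.linalg.norm(mx - my) / scale
    # higher-order terms: differences of central moments of order 2..k
    for order in range(2, k + 1):
        cx = ((x - mx) ** order).mean(axis=0)
        cy = ((y - my) ** order).mean(axis=0)
        d += np.linalg.norm(cx - cy) / scale ** order
    return d
```

Identical batches give a discrepancy of exactly zero, and shifted distributions give a larger value, which is the signal the 'cmd' layer contributes to the training loss.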

And one more (general) question: do you think accuracy is a good measure of model performance? In my opinion, the loss is more important, because accuracy in this problem is just the arg-max of the output, so it is not the same whether the model predicts positive sentiment with an output of 0.51 or 0.89, for example.

Thanks a lot for your time!


jecaJeca commented on July 24, 2024

Sorry, maybe I am constantly missing something, but the code contains self.load(self.save_weights) (in the fit function of the MANN model, below the fit call), which means that you load the saved model, doesn't it? Also, if the results are validated not on the saved model but on the last one, how do you know that the last model is the best adapted? Training stops if val_loss has not changed for 10 epochs, but does this mean the model has converged in terms of the domain adaptation problem (is this measure informative enough for domain adaptation)? Could we somehow use the CMD measure as a stopping criterion?
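The stopping rule mentioned here (halt when val_loss has not improved for 10 epochs) is standard patience-based early stopping. A minimal sketch of the bookkeeping, independent of Keras (a hypothetical helper, not the repository's code):

```python
def early_stop_epoch(losses, patience=10, min_delta=0.0):
    """Return the epoch index at which patience-based early stopping
    would halt training, or len(losses) if it never triggers.

    losses: per-epoch validation losses (e.g. the val_loss history).
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(losses):
        if loss < best - min_delta:
            # improvement: remember the best value and reset patience
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(losses)
```

In principle, the same bookkeeping could watch a different quantity, such as the val_cmd_loss history, which is one way the CMD could serve as a stopping signal; whether that selects better-adapted models is exactly the open question raised above.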

The numbers 0.51 and 0.89 are random, just examples. In my opinion, a model that outputs 0.89 for a positive answer (in the code this would be the array [0.11, 0.89]) is better adapted than a model with the same accuracy that outputs 0.51 ([0.49, 0.51]). The first model has a lower loss than the second. Please let me know if I have missed something.
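The point about equal accuracy but different losses can be checked directly. In the illustration below (plain Python, using the example values from the discussion), both predictions pick the positive class, yet their cross-entropy losses differ considerably:

```python
import math

def cross_entropy(pred, true):
    """Categorical cross-entropy for a single sample."""
    return -sum(t * math.log(p) for t, p in zip(true, pred))

def predicted_class(pred):
    """Arg-max over the output vector, as accuracy uses it."""
    return max(range(len(pred)), key=lambda i: pred[i])

true = [0.0, 1.0]            # positive sentiment
confident = [0.11, 0.89]
uncertain = [0.49, 0.51]

# same accuracy: both predict class 1
assert predicted_class(confident) == predicted_class(uncertain) == 1

# but very different losses
print(cross_entropy(confident, true))   # ≈ 0.117
print(cross_entropy(uncertain, true))   # ≈ 0.673
```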


wzell commented on July 24, 2024

Hello jecaJeca,

thank you for your help! I finally identified the issue. Sorry for the closing/re-opening mistake. The evaluations in the paper were obtained with Keras 1.1.0. There, the loss 'val_pred_acc' corresponds to the combined accuracy on source and target, 'val_pred_acc_1' corresponds to the source accuracy and 'val_pred_acc_2' to the target accuracy on the validation set.

(screenshot: losses)

Please note that in http://jkx.fudan.edu.cn/~qzhang/paper/acl2018.pdf a slightly improved optimization
for the CMD is proposed. Maybe this can help you!

For the other points, you can also write me an e-mail and I will try to answer your questions. Here are my thoughts:

  • Unfortunately, we cannot know from training which model is actually best adapted, because we have no labels in the target domain. Even from the values of the CMD or other measures we can never be 100% sure which model is best. If, for example, the labeling functions of source and target differ (see Ben-David et al., 'A theory of learning from different domains'), no domain adaptation algorithm can work well. Domain adaptation approaches in the literature usually fix an early-stopping criterion, as we did, or decrease the learning rate and let the models converge, or propose some other heuristic such as monitoring entropy values, e.g. minimum-entropy correlation alignment. Finding something out in this direction using the CMD would be a very interesting new contribution to this field.
  • I think this also depends on the magnitude of the wrong answers, e.g. [0.49, 0.51] is not as bad as [0.1, 0.9] for the true value [1, 0]. From my point of view, high certainty does not always mean a better model or better adaptation.
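The entropy heuristic mentioned in the first bullet can be sketched as follows: among candidate models with similar source performance, prefer the one whose target predictions have the lowest average entropy. This is a rough, assumption-laden proxy for adaptation quality (as the bullets above caution, not a guarantee), with hypothetical prediction values:

```python
import math

def mean_prediction_entropy(probs):
    """Average Shannon entropy of a batch of softmax outputs.

    probs: list of probability vectors (each sums to 1).
    Lower values mean more confident target-domain predictions.
    """
    total = 0.0
    for p in probs:
        total += -sum(q * math.log(q) for q in p if q > 0)
    return total / len(probs)

# hypothetical target-domain predictions from two candidate models
model_a = [[0.9, 0.1], [0.8, 0.2]]    # confident predictions
model_b = [[0.55, 0.45], [0.5, 0.5]]  # near-uniform predictions

# the heuristic would select model_a
assert mean_prediction_entropy(model_a) < mean_prediction_entropy(model_b)
```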

Hope I could help you! Please let me know.


jecaJeca commented on July 24, 2024

Thank you very much for your answer and time!


wzell commented on July 24, 2024

With Keras 1.1.0, 'val_pred_acc_1' corresponds to the source accuracy.

