Comments (11)

ricardorei commented on May 28, 2024

You are right. The diagram is correct, but the hparams are confusing because that flag is actually not used for this model.

Contrary to RegressionMetric models, where the different pooling options influence the sentence embedding computation, in UnifiedMetric models we always use the same pooling technique (the CLS token).
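
For illustration, a minimal sketch (not COMET's internal code) of the two pooling strategies being contrasted, assuming a standard [batch, seq_len, hidden] tensor of encoder hidden states:

```python
import torch

def cls_pooling(hidden_states: torch.Tensor) -> torch.Tensor:
    """Use the embedding of the first token (<s> / CLS) as the sentence vector."""
    return hidden_states[:, 0]  # [batch, hidden]

def average_pooling(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Mask out padding tokens and average the remaining token embeddings."""
    mask = attention_mask.unsqueeze(-1).float()    # [batch, seq_len, 1]
    summed = (hidden_states * mask).sum(dim=1)     # [batch, hidden]
    counts = mask.sum(dim=1).clamp(min=1e-9)       # [batch, 1]
    return summed / counts

# toy batch: 2 sentences, 5 tokens, hidden size 8
hidden = torch.randn(2, 5, 8)
mask = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])
print(cls_pooling(hidden).shape, average_pooling(hidden, mask).shape)  # both torch.Size([2, 8])
```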

ricardorei commented on May 28, 2024

The config in the YAML is just there because all classes inherit from CometModel.

vince62s commented on May 28, 2024

But why?
Does it spit out a proper sentence score if we average all token embeddings?

ricardorei commented on May 28, 2024

This was something I ran a couple of tests on, and it was not worth it for models where we perform cross-encoding. With a model like cometkiwi, where target and source are encoded together and the self-attention can look at both sentences at the same time, the representations captured in the CLS token are superior to performing average pooling across the entire input.

Another thing we tried was to gather just the embeddings of the target (which already received attention from the source) and average those... the result is very similar to using the CLS token only, and it complicates the code a bit because you have to keep track of the separator tokens in the middle of the sequence. So the decision was based on performance and simplicity...

This is not the case for other models where there is no attention between sentences... for those models we saw benefits in doing average pooling. Btw, our experiments seem to validate some findings from retrieval tasks, where there is a long-running debate about cross-encoding vs. dual encoding with average pooling.
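
To make the bookkeeping concrete, here is an illustrative sketch of averaging only the target-segment embeddings in a cross-encoded input; the helper and the token-id layout are hypothetical, not the code actually used in COMET:

```python
import torch

def target_only_average(hidden_states: torch.Tensor,
                        input_ids: torch.Tensor,
                        sep_token_id: int) -> torch.Tensor:
    """Average only the embeddings of the target segment, i.e. the tokens
    before the first separator in a joint target+source encoding."""
    pooled = []
    for states, ids in zip(hidden_states, input_ids):
        # the first separator marks the end of the target segment
        first_sep = int((ids == sep_token_id).nonzero(as_tuple=True)[0][0])
        pooled.append(states[1:first_sep].mean(dim=0))  # skip the CLS token at position 0
    return torch.stack(pooled)

# toy example: 0 = <s>, 2 = </s>; tokens 11-13 are the target, 21 is the source
hidden = torch.randn(1, 7, 8)
ids = torch.tensor([[0, 11, 12, 13, 2, 21, 2]])
print(target_only_average(hidden, ids, sep_token_id=2).shape)  # torch.Size([1, 8])
# compare with simply taking the CLS token: hidden[:, 0]
```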

vince62s commented on May 28, 2024

Last ablation question: did you try the same arch without the layerwise_attention? Does it bring a lot?

ricardorei commented on May 28, 2024

I did, it's basically the same performance.

For some tasks, different layers can give you different results, and some layers might be better than others. The idea behind using the layerwise_attention was to reduce the need for that layer search when doing hyper-parameter tuning, and I found out it worked well... additionally, we could eventually prune top layers if needed, but we ended up not doing it. We describe the layer pruning here.
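
For context, layerwise_attention is essentially a learned scalar mix over the encoder's layer outputs (ELMo-style). A rough sketch under that assumption, using a hand-rolled ScalarMix rather than COMET's own class:

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Learn a softmax-normalised weight per encoder layer and mix their states."""
    def __init__(self, num_layers: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))  # one scalar per layer
        self.gamma = nn.Parameter(torch.ones(1))               # global scale

    def forward(self, layer_states):
        norm_weights = torch.softmax(self.weights, dim=0)
        mixed = sum(w * h for w, h in zip(norm_weights, layer_states))
        return self.gamma * mixed

# toy usage: 13 layer outputs (embeddings + 12 transformer layers)
mix = ScalarMix(num_layers=13)
states = [torch.randn(2, 5, 8) for _ in range(13)]
print(mix(states).shape)  # torch.Size([2, 5, 8])
```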

Anyway, training a model without the layerwise_attention will eventually lead to similar results, and it's not an absolute need.

vince62s commented on May 28, 2024

Thanks for this.

Another question: I scored a dataset with cometkiwi-XL and then trained an xlm-roberta-large model on that dataset and the scores from cometkiwi-XL.

It barely improves over the "original" wmt22-cometkiwi-da model.

It means it is quite difficult to distil cometkiwi-XL into a smaller model. Did you observe the same?
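
For reference, a hedged sketch of that pseudo-labelling step using the public comet Python API; the checkpoint name and the shape of the prediction output may differ depending on the COMET version:

```python
# Pseudo-labelling sketch: score (src, mt) pairs with a large teacher checkpoint
# and reuse the scores as regression targets for a smaller student.
from comet import download_model, load_from_checkpoint

# assumed teacher checkpoint (gated on Hugging Face); swap in the one you actually used
teacher_path = download_model("Unbabel/wmt23-cometkiwi-da-xl")
teacher = load_from_checkpoint(teacher_path)

# reference-free QE input: only source and MT are needed
data = [
    {"src": "Der Hund bellt.", "mt": "The dog barks."},
    {"src": "Je suis fatigué.", "mt": "I am tired."},
]

output = teacher.predict(data, batch_size=8, gpus=1)
pseudo_labels = output.scores  # one score per pair, used as the student's training targets
```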

ricardorei commented on May 28, 2024

Yes, it's hard to distil an XXL/XL model into a large model... I believe this is the case because the large model is already close to the XL and XXL models. There is not a lot of improvement with scale.

I had a student working on distillation who got nice results distilling XL/XXL into a model based on MiniLM V2. The resulting model is fast and has good performance... It's a bit better than training with the annotations from WMT.
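
Not the exact recipe from that work, but a minimal sketch of the distillation objective being described: a small student regresses on the teacher's scores with an MSE loss (the encoder below is a stand-in for a pretrained MiniLM-style model):

```python
import torch
import torch.nn as nn

class StudentScorer(nn.Module):
    """Tiny stand-in for a MiniLM-sized encoder with a regression head."""
    def __init__(self, hidden: int = 384):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=6, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(hidden, 1)  # score read off the CLS position

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        states = self.encoder(embeddings)
        return self.head(states[:, 0]).squeeze(-1)  # one score per segment

student = StudentScorer()
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

# toy batch: pretend token embeddings for 4 (src, mt) pairs plus their teacher scores
embeddings = torch.randn(4, 16, 384)
teacher_scores = torch.rand(4)

loss = nn.functional.mse_loss(student(embeddings), teacher_scores)  # distillation loss
loss.backward()
optimizer.step()
```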

vince62s commented on May 28, 2024

Hmm, I am surprised; in my tests XL is much better than Large. I have not tested XXL, but based on your last paper it seems to be a marginal improvement over XL.

ricardorei commented on May 28, 2024

It's true. The improvements from Large to XL are marginal. You notice a bit bigger improvements when going to XXL, but for its size the improvements are not that big. I think this is the case because InfoXLM is a really strong encoder for its size, while XLM-R XL and XXL are undertrained for their size... unfortunately, no one seems to be interested in training large multilingual encoders anymore.

vince62s commented on May 28, 2024

I think we are not saying the same thing:)
