Comments (11)
You are right. The diagram is correct but the hparams are confusing because that flag is actually not used for this model.
Contrary to RegressionMetric models, where the different pooling options influence the sentence embedding computation, in UnifiedMetric models we always use the same pooling technique (the CLS token).
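To make the distinction concrete, here is a minimal sketch of the two pooling styles (the helper names are hypothetical, not the actual comet API):

```python
import torch

def cls_pooling(hidden_states: torch.Tensor) -> torch.Tensor:
    # UnifiedMetric-style pooling: the embedding of the first token
    # (<s>, which plays the CLS role in XLM-R) is the sentence vector.
    return hidden_states[:, 0]

def average_pooling(hidden_states: torch.Tensor,
                    attention_mask: torch.Tensor) -> torch.Tensor:
    # One of the RegressionMetric pooling options: mask out padding
    # and average the remaining token embeddings.
    mask = attention_mask.unsqueeze(-1).float()
    return (hidden_states * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
```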
The config in the YAML is just there because all classes inherit from CometModel.
But why?
Does it produce a proper sentence score if we average all token embeddings?
This was something I ran a couple of tests on, and it was not worth it for models where we perform cross-encoding. With a model like CometKiwi, where target and source are encoded together and self-attention can attend to both sentences at the same time, the representations captured in the CLS token are superior to average pooling across the entire input.
Another thing we tried was to gather just the embeddings of the target (which already received attention from the source) and average those... the result is very similar to using the CLS token only, and it complicates the code a bit because you have to keep track of the separator tokens in the middle of the sequence. So the decision was based on performance and simplicity...
This is not the case for other models where there is no attention between sentences... for those models we saw benefits from average pooling. By the way, our experiments seem to validate some findings from retrieval tasks, where there is a long-running debate about cross-encoding vs. dual encoding with average pooling.
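For illustration, a minimal sketch of that target-only variant, assuming an XLM-R-style cross-encoded input where the target segment comes first and is closed by a separator token (names are hypothetical; this is the bookkeeping the comment above refers to):

```python
import torch

def target_only_average(hidden_states: torch.Tensor,
                        input_ids: torch.Tensor,
                        sep_token_id: int) -> torch.Tensor:
    # Average only the target tokens of each cross-encoded input,
    # i.e. everything between <s> and the first separator. You must
    # locate the separator inside every sequence, which is the extra
    # complexity mentioned above.
    pooled = []
    for states, ids in zip(hidden_states, input_ids):
        sep_positions = (ids == sep_token_id).nonzero(as_tuple=True)[0]
        end = sep_positions[0].item() if sep_positions.numel() else ids.size(0)
        pooled.append(states[1:end].mean(dim=0))  # skip <s> at position 0
    return torch.stack(pooled)
```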
Last ablation question: did you try the same architecture without the layerwise_attention? Does it bring a lot?
I did, it's basically the same performance.
For some tasks different layers can give you different results, and some layers might be better than others. The idea behind layerwise_attention was to remove the need for that layer search during hyper-parameter tuning, and I found it worked well... Additionally, we could eventually prune top layers if needed, but we ended up not doing it. We describe the layer pruning here.
Anyway, training a model without the layerwise_attention will eventually lead to similar results; it's not an absolute need.
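As a rough illustration, layerwise attention amounts to an ELMo-style learned scalar mix over all encoder layers (the embedding layer plus every hidden layer). A minimal sketch, not the actual comet implementation:

```python
import torch
import torch.nn as nn

class LayerwiseAttention(nn.Module):
    """Learned scalar mix over encoder layers (ELMo-style sketch)."""

    def __init__(self, num_layers: int):
        super().__init__()
        # One weight per layer: embeddings + num_hidden_layers states.
        self.scalar_weights = nn.Parameter(torch.zeros(num_layers))
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layer_states: list[torch.Tensor]) -> torch.Tensor:
        # layer_states: one [batch, seq, dim] tensor per encoder layer.
        weights = torch.softmax(self.scalar_weights, dim=0)
        mixed = sum(w * h for w, h in zip(weights, layer_states))
        return self.gamma * mixed
```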
Thanks for this.
Another question: I scored a dataset with cometkiwi-XL and trained an xlm-roberta-large model on the dataset / scores from cometkiwi-XL.
It barely improves on the "original" wmt22-cometkiwi-da model.
That suggests it is quite difficult to distill cometkiwi-XL into a smaller model. Did you observe the same?
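For context, that distillation setup boils down to regressing the student's sentence scores onto the teacher's. A minimal sketch under those assumptions (student is a hypothetical model returning one scalar score per example):

```python
import torch
import torch.nn.functional as F

def distillation_step(student, optimizer, batch, teacher_scores):
    # One training step of score distillation: the smaller student
    # (e.g. based on xlm-roberta-large) learns to reproduce the
    # sentence-level scores produced offline by cometkiwi-XL.
    optimizer.zero_grad()
    predicted = student(batch)                     # [batch_size] scores
    loss = F.mse_loss(predicted, teacher_scores)   # regress onto teacher
    loss.backward()
    optimizer.step()
    return loss.item()
```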
Yes, it's hard to distill an XXL/XL model into a large model... I believe this is because the large model is already close to the XL and XXL models; there is not a lot of improvement with scale.
I had a student working on distillation who got nice results distilling XL/XXL into a model based on MiniLM V2. The resulting model is fast and performs well... it's a bit better than training with the annotations from WMT.
Hmm, I am surprised; in my tests XL is much better than Large. I have not tested XXL, but based on your last paper it seems a marginal improvement over XL.
It's true, the improvements from Large to XL are marginal. You notice slightly bigger improvements when going to XXL, but for its size the gains are not that big. I think this is because InfoXLM is a really strong encoder for its size, and XLM-R XL and XXL are undertrained for their size... unfortunately, no one seems to be interested in training large multilingual encoders anymore.
I think we are not saying the same thing :)