coastalcph / lex-glue
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English
Sorry, I cannot find the file "compute_avg_scores.py" in the "statistics" directory, which is referenced by:
"python statistics/compute_avg_scores.py --dataset ${TASK}"
Are the datasets available right now?
When I run the code:
from datasets import load_dataset
dataset = load_dataset("lex_glue", "scotus")
I get the following error:
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.12.1/datasets/lex_glue/lex_glue.py
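For context, older versions of the `datasets` library fetched community loading scripts from a version-pinned GitHub URL, so a pinned version can fail to resolve even when the dataset itself is fine; upgrading `datasets` (or retrying with network access) typically resolves this. A minimal sketch of how that URL is built, with the path layout inferred from the error message above (an assumption, not the library's documented API):

```python
def lex_glue_script_url(datasets_version):
    """Build the raw GitHub URL that older `datasets` releases tried to
    fetch for the lex_glue loading script (layout inferred from the
    error message above)."""
    return (
        "https://raw.githubusercontent.com/huggingface/datasets/"
        f"{datasets_version}/datasets/lex_glue/lex_glue.py"
    )

# The unreachable URL in the error corresponds to datasets 1.12.1:
print(lex_glue_script_url("1.12.1"))
```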
Hi,
I ran the SCOTUS task via the script 'run_scotus.sh', but it fails on the last line of the script, which cannot find the file "compute_avg_scores.py": python statistics/compute_avg_scores.py --dataset ${TASK}
How can I solve this problem?
Thank you very much
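Until the missing script is restored, a minimal stand-in for its apparent purpose (averaging per-seed scores into the "mean ± std" form the results below use) could look like this. The function name and formatting here are assumptions, not the original `compute_avg_scores.py`:

```python
# Hypothetical replacement for statistics/compute_avg_scores.py:
# aggregate micro/macro-F1 across seed runs as "mean ± std" percentages.
from statistics import mean, stdev

def avg_scores(per_seed_scores):
    """Format a list of per-seed scores (fractions in [0, 1])
    as a 'mean ± std' percentage string."""
    m = mean(per_seed_scores) * 100
    s = stdev(per_seed_scores) * 100 if len(per_seed_scores) > 1 else 0.0
    return f"{m:.1f} ± {s:.1f}"

# Example with made-up micro-F1 scores from 3 seeds:
print(avg_scores([0.695, 0.697, 0.699]))  # -> 69.7 ± 0.2
```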
Hello! Thanks for this great repository. I have tried experiments on many of its subtasks and it works beautifully.
The problem is that when I try to reproduce the results on EUR-LEX using run_eurlex.sh, it fails to give results similar to (or anywhere near) the ones in the paper.
VALIDATION | TEST
bert-base-uncased: MICRO-F1: 69.7 ± 0.1 MACRO-F1: 32.8 ± 0.4 | MICRO-F1: 63.1 MACRO-F1: 30.8
(I tried changing the model to legal-base-uncased, and changing the number of epochs from 2 to 20, but these attempts failed too.)
Could you take a look into this and offer some suggestions?
A more detailed log for one of the 5 seeds is as follows:
...
[INFO|trainer.py:1419] 2022-06-27 05:09:06,003 >> ***** Running training *****
[INFO|trainer.py:1420] 2022-06-27 05:09:06,003 >> Num examples = 55000
[INFO|trainer.py:1421] 2022-06-27 05:09:06,003 >> Num Epochs = 2
[INFO|trainer.py:1422] 2022-06-27 05:09:06,003 >> Instantaneous batch size per device = 8
[INFO|trainer.py:1423] 2022-06-27 05:09:06,003 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:1424] 2022-06-27 05:09:06,003 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1425] 2022-06-27 05:09:06,003 >> Total optimization steps = 13750
{'loss': 0.1809, 'learning_rate': 2.890909090909091e-05, 'epoch': 0.07}
{'loss': 0.1112, 'learning_rate': 2.7818181818181818e-05, 'epoch': 0.15}
{'loss': 0.0966, 'learning_rate': 2.6727272727272728e-05, 'epoch': 0.22}
{'loss': 0.0857, 'learning_rate': 2.5636363636363635e-05, 'epoch': 0.29}
{'loss': 0.0784, 'learning_rate': 2.454545454545455e-05, 'epoch': 0.36}
{'loss': 0.072, 'learning_rate': 2.3454545454545456e-05, 'epoch': 0.44}
{'loss': 0.0676, 'learning_rate': 2.2363636363636366e-05, 'epoch': 0.51}
{'loss': 0.0663, 'learning_rate': 2.1272727272727273e-05, 'epoch': 0.58}
{'loss': 0.0632, 'learning_rate': 2.0181818181818183e-05, 'epoch': 0.65}
{'loss': 0.0603, 'learning_rate': 1.909090909090909e-05, 'epoch': 0.73}
{'loss': 0.0593, 'learning_rate': 1.8e-05, 'epoch': 0.8}
{'loss': 0.0571, 'learning_rate': 1.6909090909090907e-05, 'epoch': 0.87}
{'loss': 0.0551, 'learning_rate': 1.5818181818181818e-05, 'epoch': 0.95}
50%|███████████████████████████████████████████████████████████████████ | 6875/13750 [14:19<14:12, 8.07it/s]
[INFO|trainer.py:622] 2022-06-27 05:23:25,910 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:2590] 2022-06-27 05:23:25,913 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-06-27 05:23:25,914 >> Num examples = 5000
[INFO|trainer.py:2595] 2022-06-27 05:23:25,914 >> Batch size = 8
{'eval_loss': 0.06690910458564758, 'eval_macro-f1': 0.26581931249101237, 'eval_micro-f1': 0.6573569918647109, 'eval_runtime': 25.2148, 'eval_samples_per_second': 198.296, 'eval_steps_per_second': 24.787, 'epoch': 1.0}
[INFO|trainer.py:2340] 2022-06-27 05:23:51,131 >> Saving model checkpoint to logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/checkpoint-6875
[INFO|configuration_utils.py:446] 2022-06-27 05:23:51,134 >> Configuration saved in logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/checkpoint-6875/config.json
[INFO|modeling_utils.py:1542] 2022-06-27 05:23:52,343 >> Model weights saved in logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/checkpoint-6875/pytorch_model.bin
[INFO|tokenization_utils_base.py:2108] 2022-06-27 05:23:52,345 >> tokenizer config file saved in logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/checkpoint-6875/tokenizer_config.json
[INFO|tokenization_utils_base.py:2114] 2022-06-27 05:23:52,346 >> Special tokens file saved in logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/checkpoint-6875/special_tokens_map.json
{'loss': 0.0546, 'learning_rate': 1.4727272727272728e-05, 'epoch': 1.02}
{'loss': 0.0531, 'learning_rate': 1.3636363636363637e-05, 'epoch': 1.09}
{'loss': 0.0518, 'learning_rate': 1.2545454545454545e-05, 'epoch': 1.16}
{'loss': 0.0521, 'learning_rate': 1.1454545454545455e-05, 'epoch': 1.24}
{'loss': 0.0497, 'learning_rate': 1.0363636363636364e-05, 'epoch': 1.31}
{'loss': 0.0481, 'learning_rate': 9.272727272727273e-06, 'epoch': 1.38}
{'loss': 0.0487, 'learning_rate': 8.181818181818181e-06, 'epoch': 1.45}
{'loss': 0.0488, 'learning_rate': 7.090909090909091e-06, 'epoch': 1.53}
{'loss': 0.0477, 'learning_rate': 6e-06, 'epoch': 1.6}
{'loss': 0.0476, 'learning_rate': 4.90909090909091e-06, 'epoch': 1.67}
{'loss': 0.047, 'learning_rate': 3.818181818181818e-06, 'epoch': 1.75}
{'loss': 0.0471, 'learning_rate': 2.7294545454545455e-06, 'epoch': 1.82}
{'loss': 0.0462, 'learning_rate': 1.6385454545454545e-06, 'epoch': 1.89}
{'loss': 0.0466, 'learning_rate': 5.476363636363636e-07, 'epoch': 1.96}
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13750/13750 [29:11<00:00, 7.96it/s]
[INFO|trainer.py:622] 2022-06-27 05:38:17,987 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:2590] 2022-06-27 05:38:17,989 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-06-27 05:38:17,989 >> Num examples = 5000
[INFO|trainer.py:2595] 2022-06-27 05:38:17,989 >> Batch size = 8
{'eval_loss': 0.06163998320698738, 'eval_macro-f1': 0.3223906812379972, 'eval_micro-f1': 0.6903704623792815, 'eval_runtime': 24.2671, 'eval_samples_per_second': 206.041, 'eval_steps_per_second': 25.755, 'epoch': 2.0}
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13750/13750 [29:36<00:00, 7.96it/s]
[INFO|trainer.py:2340] 2022-06-27 05:38:42,258 >> Saving model checkpoint to logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/checkpoint-13750
[INFO|configuration_utils.py:446] 2022-06-27 05:38:42,261 >> Configuration saved in logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/checkpoint-13750/config.json
[INFO|modeling_utils.py:1542] 2022-06-27 05:38:43,511 >> Model weights saved in logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/checkpoint-13750/pytorch_model.bin
[INFO|tokenization_utils_base.py:2108] 2022-06-27 05:38:43,513 >> tokenizer config file saved in logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/checkpoint-13750/tokenizer_config.json
[INFO|tokenization_utils_base.py:2114] 2022-06-27 05:38:43,513 >> Special tokens file saved in logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/checkpoint-13750/special_tokens_map.json
[INFO|trainer.py:1662] 2022-06-27 05:38:46,057 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:1727] 2022-06-27 05:38:46,057 >> Loading best model from logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/checkpoint-13750 (score: 0.6903704623792815).
{'train_runtime': 1781.228, 'train_samples_per_second': 61.755, 'train_steps_per_second': 7.719, 'train_loss': 0.06421310944990678, 'epoch': 2.0}
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13750/13750 [29:41<00:00, 7.72it/s]
[INFO|trainer.py:2340] 2022-06-27 05:38:47,236 >> Saving model checkpoint to logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5
[INFO|configuration_utils.py:446] 2022-06-27 05:38:47,261 >> Configuration saved in logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/config.json
[INFO|modeling_utils.py:1542] 2022-06-27 05:38:48,560 >> Model weights saved in logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/pytorch_model.bin
[INFO|tokenization_utils_base.py:2108] 2022-06-27 05:38:48,562 >> tokenizer config file saved in logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/tokenizer_config.json
[INFO|tokenization_utils_base.py:2114] 2022-06-27 05:38:48,563 >> Special tokens file saved in logs/062605_eurlex_original/eurlex/bert-base-uncased/seed_5/special_tokens_map.json
***** train metrics *****
epoch = 2.0
train_loss = 0.0642
train_runtime = 0:29:41.22
train_samples = 55000
train_samples_per_second = 61.755
train_steps_per_second = 7.719
06/27/2022 05:38:48 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:622] 2022-06-27 05:38:48,611 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:2590] 2022-06-27 05:38:48,620 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-06-27 05:38:48,620 >> Num examples = 5000
[INFO|trainer.py:2595] 2022-06-27 05:38:48,620 >> Batch size = 8
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 625/625 [00:22<00:00, 27.85it/s]
***** eval metrics *****
epoch = 2.0
eval_loss = 0.0616
eval_macro-f1 = 0.3224
eval_micro-f1 = 0.6904
eval_runtime = 0:00:22.48
eval_samples = 5000
eval_samples_per_second = 222.372
eval_steps_per_second = 27.796
06/27/2022 05:39:11 - INFO - __main__ - *** Predict ***
[INFO|trainer.py:622] 2022-06-27 05:39:11,101 >> The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:2590] 2022-06-27 05:39:11,106 >> ***** Running Prediction *****
[INFO|trainer.py:2592] 2022-06-27 05:39:11,106 >> Num examples = 5000
[INFO|trainer.py:2595] 2022-06-27 05:39:11,106 >> Batch size = 8
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 625/625 [00:22<00:00, 27.64it/s]
***** predict metrics *****
predict_loss = 0.0712
predict_macro-f1 = 0.2969
predict_micro-f1 = 0.6196
predict_runtime = 0:00:22.44
predict_samples = 5000
predict_samples_per_second = 222.741
predict_steps_per_second = 27.843
...
Hi, I used the scripts and everything worked fine; I was able to train the models without any trouble.
The results from the testing run after training are also coherent.
But the issue (at the end of this message) occurred when I tried to load a saved model in order to predict on other samples.
The model cannot be loaded because the layer names expected by the model class differ from those in the checkpoint file. As the error message (at the end) shows, some layer names in the saved file contain a doubled "encoder" (e.g. `bert.encoder.encoder.layer.4...`); when loading, the model does not use those layer names.
This problem happens with the ECtHR (A & B) and SCOTUS tasks (maybe others too) with BERT models, and it seems to occur only when using the hierarchical variant. Without the hierarchical variant, the models load fine after saving, but the results are not as good as they should be.
Do you have the same issue? I am using Ubuntu 20.04 with Python 3.8.
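One way to inspect or work around the mismatch is to remap the doubled prefixes in the checkpoint's state dict before loading. This is a hypothetical sketch, not the repository's intended fix: the checkpoint was saved by a hierarchical wrapper whose `bert.encoder` submodule is itself a full BERT model (hence the doubled prefix), so the proper route is to load it with that same wrapper class; the remap below also discards the trained `seg_encoder`/`seg_pos_embeddings` weights, which have no counterpart in plain `BertForSequenceClassification`:

```python
def remap_hierarchical_keys(state_dict):
    """Rename checkpoint keys saved by a hierarchical BERT wrapper so they
    match a plain BertForSequenceClassification state dict.
    Hypothetical workaround: wrapper-only parameters (seg_encoder,
    seg_pos_embeddings) are dropped, so the segment-level encoder is lost."""
    remapped = {}
    for key, value in state_dict.items():
        if key.startswith(("bert.seg_encoder.", "bert.seg_pos_embeddings")):
            continue  # wrapper-only parameters with no counterpart
        # Inside the wrapper, 'bert.encoder' was a full BertModel, so
        # strip one 'encoder.' level from its keys:
        if key.startswith("bert.encoder."):
            key = "bert." + key[len("bert.encoder."):]
        remapped[key] = value
    return remapped

# e.g. 'bert.encoder.encoder.layer.4.attention.self.query.weight'
#   -> 'bert.encoder.layer.4.attention.self.query.weight'
```

The remapped dict could then be passed to `model.load_state_dict(...)`; expect degraded results without the segment-level weights.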
[WARNING|modeling_utils.py:1501] 2022-03-28 18:01:33,192 >> Some weights of the model checkpoint at /home/X/Xs/lex-glue/seed_1 were not used when initializing BertForSequenceClassification: ['bert.encoder.encoder.layer.4.attention.self.query.weight', 'bert.seg_encoder.layers.1.self_attn.out_proj.weight', 'bert.encoder.encoder.layer.8.attention.self.query.bias', 'bert.seg_encoder.layers.1.norm2.weight', 'bert.encoder.encoder.layer.10.output.dense.bias', 'bert.encoder.encoder.layer.11.attention.output.LayerNorm.bias', 'bert.encoder.encoder.layer.0.attention.self.key.weight', 'bert.encoder.encoder.layer.5.intermediate.dense.weight', 'bert.encoder.encoder.layer.7.attention.output.LayerNorm.weight', 'bert.seg_encoder.layers.1.self_attn.out_proj.bias', 'bert.seg_encoder.layers.0.norm1.bias', 'bert.encoder.encoder.layer.10.attention.self.query.bias', 'bert.encoder.encoder.layer.5.attention.output.dense.weight', 'bert.encoder.encoder.layer.7.attention.self.value.weight', 'bert.seg_encoder.layers.1.norm2.bias', 'bert.encoder.encoder.layer.11.output.LayerNorm.weight', 'bert.encoder.embeddings.token_type_embeddings.weight', 'bert.encoder.embeddings.word_embeddings.weight', 'bert.encoder.encoder.layer.7.output.LayerNorm.bias', 'bert.encoder.encoder.layer.11.intermediate.dense.weight', 'bert.seg_encoder.layers.0.self_attn.out_proj.bias', 'bert.seg_encoder.layers.1.norm1.weight', 'bert.encoder.encoder.layer.10.output.dense.weight', 'bert.seg_encoder.layers.0.norm1.weight', 'bert.encoder.encoder.layer.8.attention.self.value.weight', 'bert.encoder.encoder.layer.5.intermediate.dense.bias', 'bert.encoder.encoder.layer.6.attention.output.LayerNorm.bias', 'bert.encoder.embeddings.LayerNorm.weight', 'bert.encoder.encoder.layer.6.attention.output.LayerNorm.weight', 'bert.encoder.encoder.layer.9.output.dense.weight', 'bert.encoder.encoder.layer.6.output.dense.weight', 'bert.encoder.encoder.layer.2.output.dense.weight', 'bert.encoder.encoder.layer.11.output.LayerNorm.bias', 
'bert.seg_encoder.layers.1.norm1.bias', 'bert.encoder.encoder.layer.8.intermediate.dense.bias', 'bert.encoder.encoder.layer.11.attention.self.key.bias', 'bert.encoder.encoder.layer.3.attention.output.dense.bias', 'bert.seg_encoder.layers.0.linear2.weight', 'bert.encoder.encoder.layer.10.intermediate.dense.weight', 'bert.encoder.encoder.layer.4.attention.output.LayerNorm.bias', 'bert.encoder.encoder.layer.3.attention.self.query.weight', 'bert.encoder.encoder.layer.3.output.dense.weight', 'bert.seg_encoder.norm.weight', 'bert.encoder.encoder.layer.8.output.dense.bias', 'bert.seg_encoder.layers.1.linear2.weight', 'bert.encoder.embeddings.position_ids', 'bert.encoder.encoder.layer.8.attention.output.LayerNorm.weight', 'bert.encoder.encoder.layer.3.output.LayerNorm.weight', 'bert.encoder.encoder.layer.4.attention.self.key.weight', 'bert.encoder.encoder.layer.3.attention.self.query.bias', 'bert.encoder.encoder.layer.1.output.LayerNorm.bias', 'bert.encoder.encoder.layer.10.attention.self.key.weight', 'bert.encoder.encoder.layer.2.output.LayerNorm.weight', 'bert.encoder.encoder.layer.1.attention.self.query.bias', 'bert.encoder.encoder.layer.10.attention.self.value.weight', 'bert.encoder.encoder.layer.8.output.LayerNorm.bias', 'bert.encoder.encoder.layer.0.intermediate.dense.weight', 'bert.encoder.encoder.layer.6.output.LayerNorm.bias', 'bert.encoder.encoder.layer.7.output.LayerNorm.weight', 'bert.encoder.encoder.layer.5.output.dense.bias', 'bert.encoder.encoder.layer.9.attention.output.LayerNorm.bias', 'bert.encoder.encoder.layer.2.attention.output.LayerNorm.weight', 'bert.encoder.encoder.layer.1.intermediate.dense.weight', 'bert.encoder.encoder.layer.9.attention.self.key.weight', 'bert.encoder.encoder.layer.11.attention.output.dense.weight', 'bert.encoder.encoder.layer.9.output.LayerNorm.weight', 'bert.encoder.encoder.layer.8.attention.self.key.bias', 'bert.encoder.encoder.layer.4.attention.self.value.weight', 'bert.encoder.encoder.layer.3.attention.self.value.bias', 
'bert.encoder.encoder.layer.9.attention.self.value.bias', 'bert.encoder.encoder.layer.9.attention.self.key.bias', 'bert.encoder.encoder.layer.0.attention.self.value.weight', 'bert.encoder.encoder.layer.7.output.dense.weight', 'bert.encoder.encoder.layer.7.attention.self.query.weight', 'bert.seg_encoder.layers.0.self_attn.in_proj_weight', 'bert.encoder.encoder.layer.6.attention.self.value.weight', 'bert.encoder.encoder.layer.11.attention.self.query.bias', 'bert.seg_encoder.layers.0.self_attn.out_proj.weight', 'bert.encoder.encoder.layer.2.output.dense.bias', 'bert.seg_encoder.layers.1.self_attn.in_proj_weight', 'bert.seg_encoder.layers.1.linear2.bias', 'bert.encoder.encoder.layer.0.attention.self.key.bias', 'bert.encoder.encoder.layer.7.attention.output.dense.bias', 'bert.encoder.encoder.layer.9.attention.output.dense.bias', 'bert.encoder.encoder.layer.4.attention.self.value.bias', 'bert.seg_encoder.layers.0.self_attn.in_proj_bias', 'bert.encoder.encoder.layer.6.attention.self.query.bias', 'bert.encoder.embeddings.position_embeddings.weight', 'bert.encoder.encoder.layer.8.attention.output.LayerNorm.bias', 'bert.encoder.encoder.layer.3.intermediate.dense.bias', 'bert.encoder.pooler.dense.weight', 'bert.encoder.encoder.layer.2.output.LayerNorm.bias', 'bert.encoder.encoder.layer.9.intermediate.dense.weight', 'bert.encoder.encoder.layer.1.attention.output.LayerNorm.bias', 'bert.encoder.encoder.layer.5.attention.self.query.weight', 'bert.encoder.encoder.layer.1.attention.output.dense.weight', 'bert.encoder.encoder.layer.1.output.dense.weight', 'bert.encoder.encoder.layer.0.output.dense.weight', 'bert.encoder.encoder.layer.3.attention.self.key.weight', 'bert.encoder.encoder.layer.2.attention.self.value.weight', 'bert.encoder.encoder.layer.5.attention.self.query.bias', 'bert.encoder.encoder.layer.8.output.LayerNorm.weight', 'bert.encoder.encoder.layer.9.attention.self.query.bias', 'bert.encoder.encoder.layer.1.attention.self.key.bias', 
'bert.encoder.encoder.layer.7.attention.self.key.bias', 'bert.encoder.encoder.layer.11.attention.self.value.bias', 'bert.encoder.encoder.layer.1.attention.self.query.weight', 'bert.encoder.encoder.layer.1.attention.output.dense.bias', 'bert.encoder.encoder.layer.9.attention.self.query.weight', 'bert.encoder.encoder.layer.5.output.dense.weight', 'bert.encoder.encoder.layer.4.attention.output.LayerNorm.weight', 'bert.encoder.encoder.layer.1.attention.self.value.bias', 'bert.seg_encoder.layers.1.self_attn.in_proj_bias', 'bert.encoder.encoder.layer.3.attention.self.value.weight', 'bert.encoder.encoder.layer.11.output.dense.weight', 'bert.encoder.encoder.layer.8.attention.output.dense.weight', 'bert.encoder.encoder.layer.0.output.LayerNorm.bias', 'bert.seg_encoder.layers.0.linear1.bias', 'bert.encoder.encoder.layer.4.attention.output.dense.weight', 'bert.encoder.encoder.layer.10.output.LayerNorm.weight', 'bert.encoder.encoder.layer.0.attention.self.query.weight', 'bert.encoder.encoder.layer.10.output.LayerNorm.bias', 'bert.encoder.embeddings.LayerNorm.bias', 'bert.encoder.encoder.layer.4.intermediate.dense.weight', 'bert.encoder.encoder.layer.6.attention.self.key.bias', 'bert.encoder.encoder.layer.6.attention.output.dense.weight', 'bert.encoder.encoder.layer.8.attention.self.value.bias', 'bert.encoder.encoder.layer.11.output.dense.bias', 'bert.encoder.encoder.layer.11.intermediate.dense.bias', 'bert.seg_encoder.norm.bias', 'bert.encoder.encoder.layer.1.attention.self.value.weight', 'bert.encoder.encoder.layer.0.output.LayerNorm.weight', 'bert.encoder.encoder.layer.7.attention.self.query.bias', 'bert.encoder.encoder.layer.10.attention.self.query.weight', 'bert.encoder.encoder.layer.0.attention.output.LayerNorm.weight', 'bert.seg_encoder.layers.1.linear1.weight', 'bert.encoder.encoder.layer.0.attention.self.value.bias', 'bert.encoder.encoder.layer.3.attention.self.key.bias', 'bert.encoder.encoder.layer.11.attention.output.dense.bias', 
'bert.encoder.encoder.layer.2.attention.output.dense.weight', 'bert.encoder.encoder.layer.7.attention.self.key.weight', 'bert.encoder.encoder.layer.6.output.LayerNorm.weight', 'bert.encoder.encoder.layer.10.attention.output.LayerNorm.weight', 'bert.encoder.encoder.layer.1.attention.output.LayerNorm.weight', 'bert.encoder.encoder.layer.4.output.dense.weight', 'bert.encoder.encoder.layer.7.attention.self.value.bias', 'bert.encoder.encoder.layer.7.output.dense.bias', 'bert.encoder.encoder.layer.5.attention.self.value.bias', 'bert.encoder.encoder.layer.8.attention.output.dense.bias', 'bert.encoder.encoder.layer.10.intermediate.dense.bias', 'bert.seg_encoder.layers.0.linear2.bias', 'bert.seg_encoder.layers.0.linear1.weight', 'bert.encoder.encoder.layer.11.attention.self.query.weight', 'bert.encoder.encoder.layer.2.attention.self.query.weight', 'bert.encoder.encoder.layer.5.attention.self.value.weight', 'bert.encoder.encoder.layer.4.output.dense.bias', 'bert.encoder.encoder.layer.6.attention.output.dense.bias', 'bert.encoder.encoder.layer.5.attention.output.LayerNorm.bias', 'bert.encoder.encoder.layer.9.intermediate.dense.bias', 'bert.encoder.encoder.layer.0.attention.output.LayerNorm.bias', 'bert.encoder.encoder.layer.11.attention.self.value.weight', 'bert.encoder.encoder.layer.5.attention.self.key.bias', 'bert.encoder.encoder.layer.11.attention.self.key.weight', 'bert.encoder.encoder.layer.2.intermediate.dense.weight', 'bert.encoder.encoder.layer.1.output.dense.bias', 'bert.encoder.encoder.layer.2.attention.output.LayerNorm.bias', 'bert.encoder.encoder.layer.7.attention.output.dense.weight', 'bert.encoder.encoder.layer.10.attention.output.dense.weight', 'bert.encoder.encoder.layer.6.attention.self.key.weight', 'bert.encoder.encoder.layer.2.attention.output.dense.bias', 'bert.encoder.encoder.layer.7.intermediate.dense.weight', 'bert.encoder.encoder.layer.3.attention.output.LayerNorm.bias', 'bert.encoder.encoder.layer.2.attention.self.key.weight', 
'bert.encoder.pooler.dense.bias', 'bert.encoder.encoder.layer.2.attention.self.query.bias', 'bert.encoder.encoder.layer.0.output.dense.bias', 'bert.encoder.encoder.layer.6.attention.self.query.weight', 'bert.encoder.encoder.layer.7.intermediate.dense.bias', 'bert.encoder.encoder.layer.0.attention.output.dense.bias', 'bert.encoder.encoder.layer.10.attention.output.LayerNorm.bias', 'bert.encoder.encoder.layer.0.attention.self.query.bias', 'bert.encoder.encoder.layer.5.output.LayerNorm.weight', 'bert.encoder.encoder.layer.9.attention.output.dense.weight', 'bert.encoder.encoder.layer.4.intermediate.dense.bias', 'bert.encoder.encoder.layer.5.output.LayerNorm.bias', 'bert.encoder.encoder.layer.8.attention.self.query.weight', 'bert.encoder.encoder.layer.0.intermediate.dense.bias', 'bert.encoder.encoder.layer.8.output.dense.weight', 'bert.encoder.encoder.layer.10.attention.self.value.bias', 'bert.encoder.encoder.layer.3.attention.output.dense.weight', 'bert.seg_encoder.layers.0.norm2.bias', 'bert.encoder.encoder.layer.9.attention.self.value.weight', 'bert.encoder.encoder.layer.8.attention.self.key.weight', 'bert.encoder.encoder.layer.11.attention.output.LayerNorm.weight', 'bert.encoder.encoder.layer.0.attention.output.dense.weight', 'bert.encoder.encoder.layer.3.intermediate.dense.weight', 'bert.encoder.encoder.layer.9.output.dense.bias', 'bert.encoder.encoder.layer.9.attention.output.LayerNorm.weight', 'bert.encoder.encoder.layer.10.attention.output.dense.bias', 'bert.encoder.encoder.layer.4.output.LayerNorm.weight', 'bert.encoder.encoder.layer.1.output.LayerNorm.weight', 'bert.encoder.encoder.layer.6.output.dense.bias', 'bert.encoder.encoder.layer.1.attention.self.key.weight', 'bert.encoder.encoder.layer.5.attention.output.dense.bias', 'bert.seg_pos_embeddings.weight', 'bert.encoder.encoder.layer.2.attention.self.key.bias', 'bert.encoder.encoder.layer.4.attention.output.dense.bias', 'bert.encoder.encoder.layer.3.attention.output.LayerNorm.weight', 
'bert.encoder.encoder.layer.6.intermediate.dense.bias', 'bert.encoder.encoder.layer.3.output.LayerNorm.bias', 'bert.encoder.encoder.layer.2.intermediate.dense.bias', 'bert.encoder.encoder.layer.3.output.dense.bias', 'bert.encoder.encoder.layer.10.attention.self.key.bias', 'bert.encoder.encoder.layer.1.intermediate.dense.bias', 'bert.encoder.encoder.layer.9.output.LayerNorm.bias', 'bert.seg_encoder.layers.0.norm2.weight', 'bert.encoder.encoder.layer.4.output.LayerNorm.bias', 'bert.encoder.encoder.layer.4.attention.self.query.bias', 'bert.encoder.encoder.layer.5.attention.self.key.weight', 'bert.encoder.encoder.layer.6.attention.self.value.bias', 'bert.seg_encoder.layers.1.linear1.bias', 'bert.encoder.encoder.layer.5.attention.output.LayerNorm.weight', 'bert.encoder.encoder.layer.8.intermediate.dense.weight', 'bert.encoder.encoder.layer.2.attention.self.value.bias', 'bert.encoder.encoder.layer.4.attention.self.key.bias', 'bert.encoder.encoder.layer.6.intermediate.dense.weight', 'bert.encoder.encoder.layer.7.attention.output.LayerNorm.bias']
This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:1512] 2022-03-28 18:01:33,192 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at /home/X/X/lex-glue/seed_1 and are newly initialized: ['bert.encoder.layer.0.output.LayerNorm.weight', 'bert.encoder.layer.4.output.dense.weight', 'bert.embeddings.LayerNorm.bias', 'bert.encoder.layer.11.output.LayerNorm.bias', 'bert.encoder.layer.10.attention.self.key.bias', 'bert.encoder.layer.4.output.LayerNorm.weight', 'bert.encoder.layer.11.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.self.value.bias', 'bert.encoder.layer.8.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.self.key.bias', 'bert.encoder.layer.6.attention.self.query.weight', 'bert.encoder.layer.3.attention.self.key.weight', 'bert.encoder.layer.2.output.dense.bias', 'bert.encoder.layer.11.output.dense.weight', 'bert.encoder.layer.6.output.dense.bias', 'bert.encoder.layer.6.attention.output.LayerNorm.weight', 'bert.encoder.layer.11.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.attention.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.self.query.bias', 'bert.encoder.layer.6.attention.self.key.bias', 'bert.encoder.layer.3.attention.self.key.bias', 'bert.encoder.layer.6.attention.output.LayerNorm.bias', 'bert.encoder.layer.11.attention.self.key.bias', 'bert.encoder.layer.10.attention.output.LayerNorm.bias', 'bert.encoder.layer.7.intermediate.dense.bias', 'bert.encoder.layer.2.intermediate.dense.weight', 'bert.encoder.layer.3.output.LayerNorm.weight', 'bert.encoder.layer.3.output.dense.weight', 'bert.encoder.layer.4.intermediate.dense.weight', 'bert.encoder.layer.0.intermediate.dense.bias', 'bert.encoder.layer.9.attention.output.dense.weight', 'bert.encoder.layer.4.attention.output.LayerNorm.weight', 'bert.encoder.layer.4.attention.output.dense.weight', 'bert.encoder.layer.2.attention.output.dense.bias', 'bert.encoder.layer.3.intermediate.dense.bias', 
'bert.encoder.layer.0.attention.self.key.weight', 'bert.encoder.layer.6.output.LayerNorm.bias', 'bert.encoder.layer.4.intermediate.dense.bias', 'bert.encoder.layer.5.output.LayerNorm.bias', 'bert.encoder.layer.7.attention.self.value.bias', 'bert.encoder.layer.7.attention.output.LayerNorm.weight', 'bert.encoder.layer.5.attention.output.dense.weight', 'bert.encoder.layer.8.attention.self.value.weight', 'bert.encoder.layer.5.output.dense.weight', 'bert.encoder.layer.4.attention.output.LayerNorm.bias', 'bert.encoder.layer.8.attention.self.query.bias', 'bert.encoder.layer.9.output.dense.weight', 'bert.encoder.layer.1.attention.output.dense.weight', 'bert.encoder.layer.6.intermediate.dense.bias', 'bert.encoder.layer.0.attention.output.dense.weight', 'bert.encoder.layer.10.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.output.dense.bias', 'bert.encoder.layer.5.output.LayerNorm.weight', 'bert.encoder.layer.1.intermediate.dense.weight', 'bert.encoder.layer.5.attention.output.LayerNorm.bias', 'bert.encoder.layer.9.intermediate.dense.weight', 'bert.encoder.layer.2.attention.output.dense.weight', 'bert.encoder.layer.7.attention.self.key.bias', 'bert.encoder.layer.1.attention.self.value.weight', 'bert.encoder.layer.11.attention.self.key.weight', 'bert.encoder.layer.11.attention.self.value.bias', 'bert.encoder.layer.8.attention.self.value.bias', 'bert.encoder.layer.9.intermediate.dense.bias', 'bert.encoder.layer.10.output.dense.bias', 'bert.encoder.layer.7.attention.self.query.weight', 'bert.encoder.layer.0.output.LayerNorm.bias', 'bert.encoder.layer.11.intermediate.dense.weight', 'bert.encoder.layer.5.intermediate.dense.weight', 'bert.encoder.layer.5.attention.self.value.bias', 'bert.encoder.layer.11.attention.output.dense.bias', 'bert.encoder.layer.5.intermediate.dense.bias', 'bert.encoder.layer.3.output.LayerNorm.bias', 'bert.encoder.layer.6.attention.self.value.weight', 'bert.encoder.layer.10.attention.output.dense.weight', 
'bert.encoder.layer.2.output.dense.weight', 'bert.encoder.layer.2.attention.self.query.bias', 'bert.encoder.layer.7.output.LayerNorm.weight', 'bert.encoder.layer.2.attention.self.query.weight', 'bert.encoder.layer.0.output.dense.weight', 'bert.encoder.layer.11.attention.self.query.weight', 'bert.encoder.layer.11.output.dense.bias', 'bert.encoder.layer.4.attention.self.query.bias', 'bert.encoder.layer.8.output.dense.weight', 'bert.encoder.layer.7.attention.output.dense.weight', 'bert.pooler.dense.weight', 'bert.encoder.layer.6.attention.output.dense.bias', 'bert.encoder.layer.1.intermediate.dense.bias', 'bert.encoder.layer.5.attention.self.query.weight', 'bert.encoder.layer.1.output.dense.weight', 'bert.encoder.layer.7.attention.self.key.weight', 'bert.encoder.layer.4.attention.self.value.bias', 'bert.encoder.layer.0.attention.output.dense.bias', 'bert.encoder.layer.8.attention.self.query.weight', 'bert.encoder.layer.2.attention.self.value.bias', 'bert.encoder.layer.6.attention.self.key.weight', 'bert.encoder.layer.10.intermediate.dense.bias', 'bert.encoder.layer.2.attention.output.LayerNorm.bias', 'bert.encoder.layer.10.attention.self.query.weight', 'bert.encoder.layer.1.attention.self.key.bias', 'bert.encoder.layer.1.output.LayerNorm.bias', 'bert.encoder.layer.9.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.output.dense.bias', 'bert.encoder.layer.6.attention.self.value.bias', 'bert.encoder.layer.7.attention.output.LayerNorm.bias', 'bert.encoder.layer.1.attention.self.query.weight', 'bert.encoder.layer.9.attention.self.value.weight', 'bert.encoder.layer.11.intermediate.dense.bias', 'bert.encoder.layer.8.intermediate.dense.weight', 'bert.encoder.layer.5.attention.output.dense.bias', 'bert.encoder.layer.0.intermediate.dense.weight', 'bert.encoder.layer.10.attention.output.dense.bias', 'bert.encoder.layer.4.attention.self.query.weight', 'bert.encoder.layer.10.output.dense.weight', 'bert.encoder.layer.1.attention.self.query.bias', 
[truncated transformers warning: a long list of 'bert.*' parameter names (embeddings, all encoder layers, pooler), ending with 'bert.pooler.dense.bias']
Hello! Thank you for starting this project.
I have a small question about the hierbert model (HierarchicalBert).
You use it to:
replace flat BERT encoder with hierarchical BERT encoder.
The hierarchy isn't about the labels/classes (classes could belong to a hierarchical tree), right? The hierarchy you mention relates to the text/token segments in a document, i.e. you treat a document not as one big block of plain text but as a list of text segments, and you pass that information to the model?
Thank you for any information.
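If it helps, the two-level idea can be illustrated with a toy sketch (all names and dimensions below are invented for illustration; this is not the repo's actual HierarchicalBert): each segment is encoded independently into one vector, and a second transformer then contextualizes the segment vectors across the document.

```python
import torch
import torch.nn as nn

class ToyHierEncoder(nn.Module):
    """Toy two-level encoder: per-segment encoding, then a segment-level transformer."""

    def __init__(self, hidden=32, nhead=4):
        super().__init__()
        # stand-in for a per-segment BERT: mean-pools token embeddings per segment
        self.seg_embed = nn.EmbeddingBag(1000, hidden)
        layer = nn.TransformerEncoderLayer(hidden, nhead, batch_first=True)
        self.seg_encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, segments):
        # segments: (batch, n_segments, seg_len) token ids
        b, n, l = segments.shape
        seg_vecs = self.seg_embed(segments.view(b * n, l)).view(b, n, -1)
        # contextualize segment vectors across the document
        return self.seg_encoder(seg_vecs)  # (batch, n_segments, hidden)

doc = torch.randint(0, 1000, (2, 8, 16))  # 2 docs, 8 segments, 16 tokens each
out = ToyHierEncoder()(doc)
assert out.shape == (2, 8, 32)
```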
You have probably noticed that the results with the TF-IDF + SVM approach for SCOTUS are quite high; I believe they carry a bias. The test metrics appear to be computed after retraining the Pipeline on the combined training and validation sets, while the language models are fine-tuned on the training set only. This is because sklearn.model_selection.GridSearchCV has the parameter refit set to True by default, which makes the comparison unfair.
Training on the training set alone with the best hyperparameters found on the validation set, the micro-F1 score is closer to 74.0 and the macro-F1 to 64.4.
Reference:
Line 84 in 5109aeb
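For reference, a minimal sketch of the difference (toy texts and labels, not the SCOTUS data): with refit=False, GridSearchCV only records the cross-validation results, and you can then retrain on the training split alone with the selected hyper-parameters.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = ["the court held", "the appeal was dismissed",
         "breach of contract found", "the tax ruling was upheld"] * 10
labels = [0, 1, 0, 1] * 10

pipe = Pipeline([("tfidf", TfidfVectorizer()), ("svm", LinearSVC())])
# refit=True (the default) would retrain the best pipeline on everything
# passed to fit(); refit=False keeps only the CV scores, so nothing is
# silently retrained on extra data
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1.0]}, cv=2, refit=False)
grid.fit(texts, labels)

# retrain on the training split only, with the selected hyper-parameters
pipe.set_params(**grid.best_params_).fit(texts, labels)
```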
Hello all, thank you for releasing this repository!
I am currently working on reproducing some of the results in this repository. In the readme, benchmarked results are presented for all tasks, including ECtHR Task A and ECtHR Task B. However, the shell script run_ecthr.sh only encodes one task, namely ecthr_a:
Line 6 in dfee272
Is there a reason for this, or is it implied that the script should be run a second time after changing the TASK variable to ecthr_b?
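Until that is clarified, one workaround is to loop over both task names yourself (a sketch; in the repo TASK is hard-coded inside run_ecthr.sh, so you would edit the script or substitute the variable accordingly):

```shell
# hypothetical wrapper: run the ECtHR experiment once per task
for TASK in ecthr_a ecthr_b; do
  echo "running task: ${TASK}"
  # bash run_ecthr.sh   # with TASK=${TASK} substituted inside the script
done
```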
Hi, thanks for the awesome repo!
I have encountered an issue when running the scripts for scotus.
[INFO|trainer.py:1164] 2022-05-31 13:09:08,068 >> ***** Running training *****
[INFO|trainer.py:1165] 2022-05-31 13:09:08,068 >> Num examples = 100
[INFO|trainer.py:1166] 2022-05-31 13:09:08,068 >> Num Epochs = 10
[INFO|trainer.py:1167] 2022-05-31 13:09:08,068 >> Instantaneous batch size per device = 8
[INFO|trainer.py:1168] 2022-05-31 13:09:08,068 >> Total train batch size (w. parallel, distributed & accumulation) = 64
[INFO|trainer.py:1169] 2022-05-31 13:09:08,068 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1170] 2022-05-31 13:09:08,068 >> Total optimization steps = 20
0%| | 0/20 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/cooelf/lex-glue-main/scotus.py", line 490, in <module>
main()
File "/home/cooelf/lex-glue-main/scotus.py", line 439, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/cooelf/.local/lib/python3.7/site-packages/transformers/trainer.py", line 1254, in train
for step, inputs in enumerate(epoch_iterator):
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "/home/cooelf/.local/lib/python3.7/site-packages/transformers/data/data_collator.py", line 81, in default_data_collator
batch[k] = torch.tensor([f[k] for f in features])
ValueError: expected sequence of length 64 at dim 2 (got 128)
0%| | 0/20 [00:00<?, ?it/s]
Process finished with exit code 1
It seems to be a problem with the data processing. I have checked the dimensions of the features but could not find anything unusual.
Could you give some hints on how to solve it?
Thanks!
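In case it helps the debugging: the error message itself comes from stacking ragged nested lists. default_data_collator essentially calls torch.tensor([f[k] for f in features]), which fails when the inner dimensions of the hierarchical input_ids disagree across examples (toy lists below, not the real features):

```python
import torch

# consistent inner lengths stack fine
regular = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
assert torch.tensor(regular).shape == (2, 2, 2)

# one example padded to a different inner length reproduces the ValueError
ragged = [[[1, 2], [3, 4]], [[5, 6, 9], [7, 8, 0]]]
try:
    torch.tensor(ragged)
except ValueError as e:
    print(e)  # e.g. "expected sequence of length 2 at dim 2 (got 3)"
```

So it is worth checking that every example's features are padded to exactly the same segment count and segment length before batching.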
FYI: reglab/casehold#2
The bug with the fast tokenizer should be fixed now, so it is possible to use it.
lex-glue/experiments/case_hold.py
Lines 161 to 166 in dfee272
Hi,
I tried run_ecthr.sh, but it failed to load the dataset.
The error is from line 236 in experiments/ecthr.py
train_dataset = load_dataset("lex_glue", name=data_args.task, split="train", data_dir='data', cache_dir=model_args.cache_dir)
Error info:
Traceback (most recent call last):
File "main_ecthr.py", line 505, in <module>
main()
File "main_ecthr.py", line 236, in main
train_dataset = load_dataset("lex_glue", name=data_args.task, split="train", data_dir='data', cache_dir=model_args.cache_dir)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/datasets/load.py", line 1723, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/datasets/load.py", line 1500, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/datasets/load.py", line 1168, in dataset_module_factory
return LocalDatasetModuleFactoryWithoutScript(
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/datasets/load.py", line 691, in get_module
else get_data_patterns_locally(base_path)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/datasets/data_files.py", line 451, in get_data_patterns_locally
raise FileNotFoundError(f"The directory at {base_path} doesn't contain any data file") from None
FileNotFoundError: The directory at lex_glue/data doesn't contain any data file
If I delete data_dir='data', the error becomes:
08/02/2022 11:48:43 - INFO - datasets.data_files - Some files matched the pattern 'lex_glue/**[-._ 0-9/]train[-._ 0-9]*' at /workspace/MaxPlain/lex_glue but don't have valid data file extensions: [PosixPath('/workspace/MaxPlain/lex_glue/statistics/report_train_time.py')]
Traceback (most recent call last):
File "main_ecthr.py", line 505, in <module>
main()
File "main_ecthr.py", line 236, in main
train_dataset = load_dataset("lex_glue", name=data_args.task, split="train", cache_dir=model_args.cache_dir)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/datasets/load.py", line 1723, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/datasets/load.py", line 1500, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/datasets/load.py", line 1168, in dataset_module_factory
return LocalDatasetModuleFactoryWithoutScript(
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/datasets/load.py", line 695, in get_module
data_files = DataFilesDict.from_local_or_remote(
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/datasets/data_files.py", line 786, in from_local_or_remote
DataFilesList.from_local_or_remote(
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/datasets/data_files.py", line 754, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/datasets/data_files.py", line 359, in resolve_patterns_locally_or_by_urls
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to resolve any data file that matches '['**[-._ 0-9/]train[-._ 0-9]*', 'train[-._ 0-9]*', '**[-._ 0-9/]training[-._ 0-9]*', 'training[-._ 0-9]*']' at /workspace/MaxPlain/lex_glue with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'GRIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG', 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF', 'EMF', 'XBM', 'XPM', 'zip']
Is there anything wrong in loading dataset?
When I run the run_ecthr.sh script in experiments folder
Such error occurs:
Traceback (most recent call last):
File "main_ecthr.py", line 505, in <module>
main()
File "main_ecthr.py", line 454, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/transformers/trainer.py", line 1498, in train
return inner_training_loop(
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/transformers/trainer.py", line 1832, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/transformers/trainer.py", line 2038, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/transformers/trainer.py", line 2758, in evaluate
output = eval_loop(
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/transformers/trainer.py", line 2936, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/transformers/trainer.py", line 3177, in prediction_step
loss, outputs = self.compute_loss(model, inputs, return_outputs=True)
File "/workspace/MaxPlain/lexglue/experiments/trainer.py", line 8, in compute_loss
outputs = model(**inputs)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 1556, in forward
outputs = self.bert(
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/workspace/MaxPlain/lexglue/models/hierbert.py", line 100, in forward
seg_encoder_outputs = self.seg_encoder(encoder_outputs)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/torch/nn/modules/transformer.py", line 238, in forward
output = mod(output, src_mask=mask, src_key_padding_mask=src_key_padding_mask)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/OmniXAI/lib/python3.8/site-packages/torch/nn/modules/transformer.py", line 437, in forward
return torch._transformer_encoder_layer_fwd(
RuntimeError: expected scalar type Half but found Float
I tried to debug it, and it seems to be because the Trainer fails to cast the model to dtype=torch.float16.
I also tried the evaluation. It will fail and report the same error.
# Evaluation
if training_args.do_eval:
    logger.info("*** Evaluate ***")
    metrics = trainer.evaluate(eval_dataset=eval_dataset)
    max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)
    metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset))
    trainer.log_metrics("eval", metrics)
    trainer.save_metrics("eval", metrics)
After I remove --fp16 --fp16_full_eval from run_ecthr.sh, it works as expected.
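For anyone hitting the same dtype mismatch, the failure mode can be reproduced and worked around with an explicit cast before the segment encoder. Below is a sketch using float64 as a stand-in for fp16 so it runs on CPU; the analogous fix in hierbert.py would be to cast encoder_outputs to the seg_encoder's parameter dtype before the call (an assumption about where the cast belongs, not a confirmed patch):

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=16, nhead=2, batch_first=True)
enc = nn.TransformerEncoder(layer, num_layers=1).double()  # stand-in for a half-precision encoder

x = torch.randn(1, 4, 16)  # float32 activations, like the segment embeddings
try:
    enc(x)  # dtype mismatch, analogous to "expected scalar type Half but found Float"
except RuntimeError as e:
    print(e)

# cast the input to the encoder's parameter dtype before the call
out = enc(x.to(next(enc.parameters()).dtype))
assert out.dtype == torch.float64
```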
Hi @iliaschalkidis,
As mentioned previously, I have been running some experiments on LexGLUE benchmarks and will soon be finishing with the runs for legal-bert-small
. It is mentioned that it would be useful to report the results of this smaller model.
Should I just post the results here, or would you prefer another medium?
Hi,
I am facing a problem while loading the case_hold dataset: it reports KeyError: 'question'. Could you please fix it, or give me some advice on loading that dataset?
Thank you very much!
dataset = load_dataset("lex_glue", "case_hold", revision="1.15.1")
Downloading and preparing dataset lex_glue/case_hold (download: 29.01 MiB, generated: 255.06 MiB, post-processed: Unknown size, total: 284.08 MiB) to C:\XXXXX
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Anaconda3\lib\site-packages\datasets\load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "D:\Anaconda3\lib\site-packages\datasets\builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "D:\Anaconda3\lib\site-packages\datasets\builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "D:\Anaconda3\lib\site-packages\datasets\builder.py", line 1103, in _prepare_split
example = self.info.features.encode_example(record)
File "D:\Anaconda3\lib\site-packages\datasets\features\features.py", line 1033, in encode_example
return encode_nested_example(self, example)
File "D:\Anaconda3\lib\site-packages\datasets\features\features.py", line 808, in encode_nested_example
return {
File "D:\Anaconda3\lib\site-packages\datasets\features\features.py", line 808, in <dictcomp>
return {
File "D:\Anaconda3\lib\site-packages\datasets\utils\py_utils.py", line 108, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "D:\Anaconda3\lib\site-packages\datasets\utils\py_utils.py", line 108, in <genexpr>
yield key, tuple(d[key] for d in dicts)
KeyError: 'question'
I'm running the following script: CUDA_VISIBLE_DEVICES=4 python3 -i /net/scratch/jasonhu/legal_dec-sum/lex-glue/experiments/ecthr.py --model_name_or_path 'bert-base-uncased' --do_lower_case 'True' --task 'ecthr_a' --output_dir logs/'ecthr_a'/'bert-base-uncased'/seed_1 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size 2 --per_device_eval_batch_size 2 --seed 1 --gradient_accumulation_steps 4 --eval_accumulation_steps 4
And then the following bug occurs:
I have tried many ways to solve it without success. Any idea how to tackle this problem? Thanks!
The SCOTUS dataset available as part of the LexGLUE corpus is documented as having 14 classes. Upon inspecting the HuggingFace SCOTUS dataset, however, we only find 13 classes with the following code:
from datasets import load_dataset # !pip install datasets
import numpy as np
scotus = load_dataset('lex_glue', 'scotus')
labels = list(scotus['train']['label'])
classes = np.unique(labels)
print(classes, len(classes))
scotus = load_dataset('lex_glue', 'scotus')
labels = list(scotus['test']['label'])
classes = np.unique(labels)
print(classes, len(classes))
The results show only 13 unique classes instead of 14 in both splits.
Is there an issue with how we're extracting the data? If so, we'd greatly appreciate any help.
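As a side note, counting unique labels per split can under-report the label-set size whenever one class simply never occurs in that split, so the documented class count and the per-split observed count need not match (toy labels below, not the actual SCOTUS labels):

```python
import numpy as np

train = np.array([0, 1, 2, 3, 3])
test = np.array([0, 1, 2, 13])  # class 13 appears only in this split

print(np.unique(train).size)                          # 4
print(np.unique(test).size)                           # 4
print(np.unique(np.concatenate([train, test])).size)  # 5
```

Checking the union of all splits (train, validation, and test together) distinguishes "a class is missing from the data" from "a class is missing from one split".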
Hi, my reproduced results for EUR-LEX are quite far from the reported ones. Could you provide the hyper-parameters of DeBERTa for EUR-LEX? And which version of DeBERTa was used: v2 or v3, base or large?
Looking forward to your reply. Thanks!