Universal Information Extraction: code for the NeurIPS 2022 paper "Unifying Information Extraction with Latent Adaptive Structure-aware Generative Language Model."
When I ran the NER task on multiple GPUs, the program crashed and logged error messages like the ones in the title. I tried to fix the bug and traced it to the grad_fn call at line 567 of run_struct_post_train.py, but adding state.params.requires_grad_(True) or loss.requires_grad_(True) did not make the program work either.
I hope you can give me some guidance, thanks!
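For context, the usual cause of "element 0 of tensors does not require grad and does not have a grad_fn" is that the loss was computed on a detached graph; calling requires_grad_(True) on the finished loss cannot repair that. A minimal sketch (a stand-in linear module, not the LasUIE model) showing both the failure mode and the fix:

```python
# Minimal sketch of the "no grad_fn" failure and its usual fix.
# The model here is a hypothetical stand-in, not run_struct_post_train.py.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
x = torch.randn(3, 4)

# Failure mode: the forward pass runs under torch.no_grad() (or on tensors
# detached from the graph), so the loss has no grad_fn. Calling
# loss.requires_grad_(True) afterwards only turns the loss into a leaf
# tensor; it does not rebuild the graph, so backward() reaches no parameters.
with torch.no_grad():
    bad_loss = model(x).sum()
assert bad_loss.grad_fn is None  # nothing to backpropagate through

# Fix: ensure parameters require grad and the forward pass itself runs with
# gradients enabled (not inside no_grad()/inference_mode()).
for p in model.parameters():
    p.requires_grad_(True)
loss = model(x).sum()
assert loss.grad_fn is not None
loss.backward()  # gradients now flow into model.weight / model.bias
```

In a multi-GPU run the same symptom can appear if one replica's forward pass is wrapped in an evaluation context, so it is worth checking that the crashing code path is not inside torch.no_grad().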
The error messages are as follows:
When building the corpus, I followed the link given in the paper to download Wikipedia, but the page shown below redirected to GitHub after I clicked download, and the downloaded archive contained only a single txt file. When I run the code it reports FileNotFoundError: Unable to find '/home/machenghao/LasUIE-master/data/post-training/wikipedia-en/dev.txt'. It seems the corpus is incomplete. Is this a problem with my download, and how can I obtain the complete corpus? I hope you can answer this, many thanks!
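If only a single raw txt file is available, one workaround is to carve the expected split files out of it yourself. The sketch below assumes the post-training script only needs train.txt and dev.txt under the wikipedia-en directory (the dev.txt name is taken from the FileNotFoundError; the source file name and 1% dev fraction are guesses):

```python
# Sketch: split one raw corpus file into the train.txt / dev.txt files the
# post-training script looks for. Paths and split ratio are assumptions.
from pathlib import Path
import tempfile

def split_corpus(corpus_dir: Path, source_name: str = "wikipedia.txt",
                 dev_frac: float = 0.01) -> None:
    """Write dev.txt (first dev_frac of lines) and train.txt (the rest)."""
    lines = (corpus_dir / source_name).read_text(encoding="utf-8").splitlines()
    n_dev = max(1, int(len(lines) * dev_frac))
    (corpus_dir / "dev.txt").write_text("\n".join(lines[:n_dev]), encoding="utf-8")
    (corpus_dir / "train.txt").write_text("\n".join(lines[n_dev:]), encoding="utf-8")

# Demo on a throwaway directory with a dummy 200-line corpus.
tmp = Path(tempfile.mkdtemp())
(tmp / "wikipedia.txt").write_text(
    "\n".join(f"sentence {i}" for i in range(200)), encoding="utf-8")
split_corpus(tmp)
```

In a real run, corpus_dir would point at data/post-training/wikipedia-en/ instead of the temporary directory.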
Traceback (most recent call last):
File "/home/LasUIE/run_finetune.py", line 289, in <module>
model_wrapper = ModelWrapper(**config)
File "/home/LasUIE/run_finetune.py", line 46, in __init__
self.model = get_model(self.model_type, self.lm_location)
File "/home/LasUIE/run_finetune.py", line 35, in get_model
return model_dict[model_type].from_pretrained(lm_location)
File "/home/anaconda3/envs/lasuie/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2795, in from_pretrained
) = cls._load_pretrained_model(
File "/home/anaconda3/envs/lasuie/lib/python3.8/site-packages/transformers/modeling_utils.py", line 3008, in _load_pretrained_model
raise ValueError(
ValueError: The state dictionary of the model you are trying to load is corrupted. Are you sure it was properly saved?
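This ValueError is raised by transformers when the checkpoint file deserializes into something other than a usable state dict (e.g. a truncated or partially downloaded file). A quick way to check the file before calling from_pretrained() is to load it directly with torch and verify it is a non-empty dict of tensors. The sketch below saves and inspects a tiny stand-in model, not the LasUIE checkpoint:

```python
# Sketch: sanity-check a saved checkpoint file by loading the raw state dict.
# The Linear model and temporary path are stand-ins for the real checkpoint.
import os
import tempfile

import torch
import torch.nn as nn

ckpt = os.path.join(tempfile.mkdtemp(), "pytorch_model.bin")
torch.save(nn.Linear(8, 8).state_dict(), ckpt)

state = torch.load(ckpt, map_location="cpu")
ok = (isinstance(state, dict)
      and len(state) > 0
      and all(torch.is_tensor(t) for t in state.values()))
print("checkpoint looks sane:", ok)
```

If this direct load fails or returns an empty dict for the real pytorch_model.bin, the file itself is damaged and re-downloading or re-saving the checkpoint is the fix, not changes to the loading code.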
Hello, I ran the code successfully using the dataset provided in the project, but it seems the T5 model is not suitable for Chinese. Is there a way to run the code on a Chinese dataset?
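One plausible route is to swap the English T5 checkpoint for a T5-family model whose vocabulary covers Chinese. The hub IDs below are real Hugging Face checkpoints, but whether they drop into this codebase unchanged is an assumption:

```python
# Sketch: pick a T5-family checkpoint by dataset language. Hub IDs are real;
# compatibility with this repo's training scripts is an assumption.
CHINESE_CAPABLE_T5 = {
    "multilingual": "google/mt5-base",  # mT5, covers 101 languages incl. Chinese
    "chinese": "uer/t5-base-chinese-cluecorpussmall",  # Chinese-only T5
}

def pick_checkpoint(language: str) -> str:
    """Return a checkpoint name suited to the dataset language (default English T5)."""
    return CHINESE_CAPABLE_T5.get(language, "t5-base")

# The chosen name would then replace the English T5 location, e.g.:
#   model = AutoModelForSeq2SeqLM.from_pretrained(pick_checkpoint("multilingual"))
print(pick_checkpoint("multilingual"))
```

Note that mT5 was pretrained without supervised tasks, so it typically needs more fine-tuning than T5 before producing sensible structured outputs.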