The source code of paper "SelfORE: Self-supervised Relational Feature Learning for Open Relation Extraction"
# choose the correct running configuration in the config.yaml file.
python run.py
How can I get the train, dev, and test data from raw text? I want to run the code, but there is no dataset. Thank you.
Hi,
I tried to run the code, and there is a problem at line 136 in classifier.py.
The error is:
total_train_loss += loss.item()
AttributeError: 'str' object has no attribute 'item'
Is it related to the version of transformers?
Thanks.
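Quite possibly, yes. In newer transformers releases the forward pass returns a dict-like ModelOutput, and unpacking it as a tuple (`loss, logits = model(...)`) yields the string keys rather than the tensors, which would produce exactly this `'str' object has no attribute 'item'` error. This is a hedged guess, not a confirmed diagnosis of the repo; a small helper (hypothetical name `extract_loss`) that works for both the old tuple API and the new ModelOutput objects:

```python
def extract_loss(outputs):
    """Return the loss from a transformers forward pass, whether `outputs`
    is a dict-like ModelOutput (transformers v3+) or a plain tuple (v2)."""
    loss = getattr(outputs, "loss", None)  # ModelOutput exposes .loss
    if loss is None:
        loss = outputs[0]                  # old tuple API: (loss, logits, ...)
    if isinstance(loss, str):
        # Unpacking a dict-like output as a tuple yields its field names
        # ("loss", "logits"), which triggers the reported AttributeError
        # on .item(); access outputs.loss explicitly instead.
        raise TypeError("got a field name, not a tensor; use outputs.loss")
    return loss
```

A workaround along these lines (or pinning transformers to the version the repo was developed against) should make `loss.item()` work again.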
Hi,
we are trying to use your old code, but cannot replicate your results so far. In particular, we have trouble with the DEC clustering phase: the algorithm converges to a single cluster when the KL divergence is minimized.
Could you tell us which (if any) of the following two lines is the correct one (and for which dataset):
SelfORE/adaptive_clustering.py
Lines 218 to 219 in 6fc1df1
And is the number of training epochs (default 200) correct?
BTW, when are you planning to release the refactored code?
Thanks
Benjamin
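For anyone else debugging this collapse: the DEC phase sharpens the soft assignments q into a target distribution p and minimizes KL(p‖q), and in the standard formulation p is normalized by the soft cluster frequencies precisely to stop one cluster from absorbing everything. A minimal numpy sketch of that target distribution, following the usual DEC recipe rather than the exact lines in adaptive_clustering.py, which I have not verified:

```python
import numpy as np

def target_distribution(q):
    """Standard DEC auxiliary target: p_ij is proportional to q_ij^2 / f_j,
    where f_j = sum_i q_ij is the soft cluster frequency. Dividing by f_j
    is what penalizes large clusters and prevents single-cluster collapse."""
    weight = q ** 2 / q.sum(axis=0)           # q_ij^2 / f_j
    return (weight.T / weight.sum(axis=1)).T  # renormalize each row to sum to 1

# toy check: confident assignments get sharper, rows stay distributions
q = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]])
p = target_distribution(q)
```

If the repo's version drops the `q.sum(axis=0)` normalization, that could plausibly explain the one-cluster convergence, but that is a guess worth checking against the two candidate lines.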
I've been trying to run the code, but it kept crashing my server. I noticed that the line that crashes it is this one:
Line 86 in 6499713
Putting the data and model on the GPU doesn't crash the server, and I'm just curious what the reason for this decision may be. Is there a particular reason why you didn't use the GPU for this operation?
Thanks.
The model always exceeds GPU memory, even when I set batch_size to 1. So I wonder: what is the size of your GPU? Thank you.
Simon's article mentions 15 relation types, but there is no experimental code, so there is no way to know how the 10 relation types were selected from the TRE_x dataset. Can you give some hints or help?
Hi,
I'm trying to run this code on my own dataset, which is large (40,000 sentences), but on executing it I get the following error:
Traceback (most recent call last):
File "run.py", line 12, in <module>
main()
File "run.py", line 8, in main
selfore.start()
File "/home/ec2-user/SageMaker/ap140821-model-development/Notebooks/SelfORE/selfore.py", line 42, in start
labels, embs = self.loop()
File "/home/ec2-user/SageMaker/ap140821-model-development/Notebooks/SelfORE/selfore.py", line 30, in loop
bert_embs = self.classifier.get_hidden_state()
File "/home/ec2-user/SageMaker/ap140821-model-development/Notebooks/SelfORE/classifier.py", line 92, in get_hidden_state
outputs = self.model(self.input_ids, self.attention_masks)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 1344, in forward
return_dict=return_dict,
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 841, in forward
return_dict=return_dict,
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 482, in forward
output_attentions,
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 402, in forward
output_attentions=output_attentions,
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 339, in forward
output_attentions,
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 251, in forward
mixed_value_layer = self.value(hidden_states)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 91, in forward
return F.linear(input, self.weight, self.bias)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/functional.py", line 1676, in linear
output = input.matmul(weight.t())
RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 12567183360 bytes. Error code 12 (Cannot allocate memory)
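The traceback suggests that `get_hidden_state` feeds all of `self.input_ids` through BERT in a single forward pass, so the failed ~12.6 GB allocation is roughly a full-dataset activation tensor inside one attention layer. Batching the call should fix it regardless of dataset size. A hedged sketch of the loop structure, with the torch-specific adaptation shown only as comments since I haven't tested it against the repo:

```python
def batched_map(encode, inputs, batch_size=32):
    """Apply `encode` to `inputs` one slice at a time so the model never
    materializes activations for the whole dataset at once."""
    outputs = []
    for start in range(0, len(inputs), batch_size):
        outputs.extend(encode(inputs[start:start + batch_size]))
    return outputs

# Hypothetical adaptation of classifier.get_hidden_state (untested):
#     embs = []
#     with torch.no_grad():                      # no gradients needed here
#         for i in range(0, len(self.input_ids), bs):
#             out = self.model(self.input_ids[i:i + bs],
#                              self.attention_masks[i:i + bs])
#             embs.append(out[0].cpu())          # move each batch off-device
#     bert_embs = torch.cat(embs)
```

Wrapping the inference in `torch.no_grad()` also avoids storing intermediate activations for backprop, which cuts memory further.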
Hello, I'd like to ask about some implementation details in your code.
In the latest code:
1. The encoder is implemented as 128*768 -> ReLU -> 400 -> ReLU -> 10, while the paper describes 2*768 -> 500 -> 500. What is the difference between the two?
2. There seems to be no reconstruction pretraining step?
3. The BERT features all seem to be 128*768 (the concatenation of the whole sentence), rather than the 2*768 features corresponding to the entity markers?
Thanks!
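On point 3: if the code currently flattens the full 128*768 sequence, restricting the features to the two entity-marker positions would give the 2*768 vector described in the paper. A hedged numpy sketch of that gather; the function name, marker positions, and tensor layout here are illustrative assumptions, not the repo's actual variables:

```python
import numpy as np

def entity_pair_features(hidden_states, e1_pos, e2_pos):
    """Concatenate the hidden vectors at the two entity-marker positions,
    yielding a 2*hidden_size feature instead of seq_len*hidden_size."""
    return np.concatenate([hidden_states[e1_pos], hidden_states[e2_pos]])

# toy example: seq_len=4, hidden_size=3 -> feature of shape (6,)
h = np.arange(12, dtype=float).reshape(4, 3)
feat = entity_pair_features(h, 1, 3)
```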
Hi, can you show how to get these json files from the data-sample.txt?
I am not familiar with open relation extraction. Can you give specific information about data processing?
May I ask which tool you used to draw Figure 1 in your paper?
Hello, thanks for your contribution.
Can I get these data and code?
Best wishes!
Hi,
I'm very interested in your paper and want to replicate it before testing it on other kinds of content. However, I can't do so, since some of the code is missing while it awaits refactoring. Would it be possible to check in the old code as a different branch so that I could start working with that? Alternatively, do you have an estimated date for when the refactoring will be done and the revised code available?