deeplearnxmu / nseg
Code for “Graph based Neural Sentence Ordering” (IJCAI 2019)
My colleagues and I have been trying to replicate this work recently, but we are confused about the text format. Would you mind releasing some sample data for us?
Hello, I'm interested in your work, but when I run run.sh I hit an error: beam_ix and token_ix in generator.py are float tensors, not int. For example, beam_ix is tensor([0.8333, 0.1667, 0.5000, 0.0000, 0.3333, 0.6667], device='cuda:0'). I wonder how I can fix it. Thanks!
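Not verified against this repo's generator.py, but fractional values like 0.8333 (= 5/6) are the classic symptom of newer PyTorch versions performing true division where older beam-search code relied on `/` doing floor division on integer tensors. A minimal sketch of the usual fix, with illustrative values (the variable names beyond beam_ix/token_ix are my assumptions):

```python
# Hypothetical illustration: older PyTorch floor-divided int tensors with `/`,
# while newer versions do true division, turning beam indices into floats
# such as 0.8333. The usual fix is explicit floor division and modulo.
vocab_size = 6                     # assumed per-beam vocabulary size
best_id = 5                        # flattened index of the best hypothesis
beam_ix = best_id // vocab_size    # which beam the hypothesis came from
token_ix = best_id % vocab_size    # which token within that beam
```

If the repo's code computes these with `/` on tensors, replacing it with `//` (or `torch.div(..., rounding_mode='floor')`) should restore integer indices.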
When training on the NIPS dataset, my accuracy gets stuck at around 54%, much lower than the result reported in the paper. I want to know how I can improve it.
Hi,
I have sentences in this order (example):
S1, S2, S3
S4, S1, S5
1. How can I convert the above sentence order to run this algorithm?
2. Can I use this algorithm to predict the next sentence?
3. If I give a new sentence S6, how can I predict the next sentence?
Could you share the script used for preprocessing the data to create the '.eg', '.lower' and 'vocab.pt' files?
'bash run.sh' failed with the following error:
Traceback (most recent call last):
File "main.py", line 203, in <module>
DOC.vocab = torch.load(args.vocab)
File "anaconda3/envs/NSEG/lib/python3.6/site-packages/torch/serialization.py", line 387, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "anaconda3/envs/NSEG/lib/python3.6/site-packages/torch/serialization.py", line 574, in _load
result = unpickler.load()
AttributeError: Can't get attribute '_default_unk_index' on <module 'torchtext.vocab' from '/anaconda3/envs/NSEG/lib/python3.6/site-packages/torchtext/vocab.py'>
The test set is missing many examples. With this version of the dataset the reported results can indeed be reproduced, but with the original test set the results are much worse.
Nothing happens.
The preprocessed file for entities looks like this as mentioned in the ReadMe:
*.eg: entity1:i-r means entity1 is in the sentence_i and its role is r.
The values for roles (r) in the sample nips entity files are 1/2/3. Could you please clarify what exact role does each of these numbers refer to?
As I understand from the paper, are they labeled as: 1-subject, 2-object, 3-other?
Thanks!
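For what it's worth, a hypothetical parser for that token format (the function name and exact separators are my assumptions, based only on the README's `entity1:i-r` description):

```python
def parse_entity_token(token):
    """Split an 'entity:i-r' token into (entity, sentence index i, role r)."""
    entity, pos = token.rsplit(":", 1)  # rsplit in case the entity contains ':'
    sent_ix, role = pos.split("-")
    return entity, int(sent_ix), int(role)
```

For example, `parse_entity_token("entity1:3-2")` would return `("entity1", 3, 2)`.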
Hi, we recently wanted to apply your model in a text-ordering pipeline, but we found that the code in this repo is a simplified version of what the original paper calls the F-graph. Would you mind sharing the Graph-SE version of the code with us?
Hi
I want to test the model on paragraphs whose sentences are in some random order, and have the model predict the correct sentence order.
For the NIPS dataset, I assume the test.lower file contains the gold paragraph sentences in their correct order, while the mdl.devorder file (created after model training) contains the model's output (predicted values ||| truth values).
But the gold paragraph sentence order does not match the truth-values order.
Could you please explain how decoding works for the test data?
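As a side note, when comparing a predicted order against the gold order, a standard sentence-ordering metric is Kendall's tau; here is a minimal self-contained sketch (my own illustration, not this repo's evaluation code):

```python
def kendall_tau(order):
    """`order` is the predicted sequence of gold sentence indices.
    tau = 1 - 2 * inversions / C(n, 2); 1.0 means a perfect ordering."""
    n = len(order)
    inversions = sum(
        1 for i in range(n) for j in range(i + 1, n) if order[i] > order[j]
    )
    return 1.0 - 2.0 * inversions / (n * (n - 1) / 2)
```

For example, `kendall_tau([0, 1, 2])` gives 1.0 (perfect order) and `kendall_tau([2, 1, 0])` gives -1.0 (fully reversed).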