morningmoni / taxorl
65 stars · 6 watchers · 13 forks · 3.01 MB

Code for paper "End-to-End Reinforcement Learning for Automatic Taxonomy Induction", ACL 2018

License: MIT License

Python 96.41% Shell 3.59%
reinforcement-learning dynet taxonomy taxonomy-construction taxonomy-induction wordnet


taxorl's People

Contributors

morningmoni


taxorl's Issues

Use the saved model for inference on a new set of terms

Hi,
I built everything from scratch for Italian, ran train_RL.py using the Italian version of the SemEval-2016 datasets for training, validation, and test, and saved the model. I can't figure out how to use the saved model for taxonomy induction on my own vocabulary. Thanks.
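
A minimal DyNet save/restore sketch, not the repo's actual network: the parameter shapes, file name, and build_params helper below are hypothetical. The key point is that populate() can only restore weights into a ParameterCollection rebuilt with the same parameters, in the same order, as at training time; after that, the usual scoring/decoding code in model_RL.py can be run on new term pairs.

import dynet as dy

def build_params(pc):
    # hypothetical shapes; mirror whatever model_RL.py actually builds
    W = pc.add_parameters((50, 100))
    b = pc.add_parameters(50)
    return W, b

# at training time
pc_train = dy.ParameterCollection()
build_params(pc_train)
pc_train.save("taxorl.model")        # hypothetical file name

# at inference time, in a fresh process
pc_infer = dy.ParameterCollection()
build_params(pc_infer)               # same construction order and shapes
pc_infer.populate("taxorl.model")    # restores the saved weights in place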

Frequency files

I am applying your method to my own dataset, so I need to generate my own frequency files.
How do you generate twodatasets_freq_w.pkl and the other frequency files?
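
A hedged sketch for rebuilding such a file, assuming (from the name alone) that *_freq_w.pkl is a pickled {term: count} dictionary computed over the training corpus; the repo's real preprocessing may differ.

import pickle
from collections import Counter

def build_word_freq(corpus_path, out_path):
    # count lowercased whitespace tokens over the corpus (assumed format)
    counts = Counter()
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            counts.update(line.lower().split())
    with open(out_path, "wb") as f:
        pickle.dump(dict(counts), f)

# build_word_freq("my_corpus.txt", "my_freq_w.pkl")  # hypothetical file names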

corpus

The datasets in corpus are used in knowledge_resource.py, such as _term_to_id.db, _id_to_term.db, path_to_id.db, _id_to_path.db, _l2r.db, etc. I know from your source code that these db files were created by you. However, I can't understand what the input to the shell script in the corpus folder is. For example, parameters like $1, $2, $3, $4 are read from the command line, but I don't know what files these parameters represent. I would be grateful if you could provide the db files you created. Thanks very much.

no generalization

Hi,

I am trying to run training using the features from the pickle file. The performance on the training set keeps going up, but there seems to be no generalization (the performance on the evaluation set remains constant). I was wondering whether you used different hyperparameter values or training options than the defaults to get your results.

pickled_data missing

Hi,
Is there any place to download the "pickled_data/lower2original.pkl" data mentioned in utils_common.py?
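
If the file cannot be downloaded, it can likely be rebuilt locally. A hedged sketch, assuming from the file name that lower2original.pkl maps lowercased terms to their original surface forms; the term list here is a placeholder for your own vocabulary.

import os
import pickle

terms = ["Natural Language Processing", "WordNet", "SemEval"]  # placeholder vocabulary
lower2original = {t.lower(): t for t in terms}

os.makedirs("pickled_data", exist_ok=True)
with open("pickled_data/lower2original.pkl", "wb") as f:
    pickle.dump(lower2original, f)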

Could not find SemEval tree file

As you wrote in train_RL.py:

        trees_semeval = read_edge_files("../datasets/SemEval-2016/original/",
                                        given_root=True, filter_root=args.filter_root, allow_up=False)

It seems you are trying to read tree.ptb from a directory. Should there be a tree file for SemEval, and how can I find or generate it?
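
For reference, a hedged conversion sketch. It assumes the SemEval-2016 Task 13 gold taxonomies are tab-separated "id<TAB>hyponym<TAB>hypernym" lines and that read_edge_files expects one hyponym/hypernym pair per line; both assumptions (and the expected file extension) should be checked against the repo's utilities.

def taxo_to_edges(taxo_path, out_path):
    # convert an assumed "id\thyponym\thypernym" file into plain edge pairs
    with open(taxo_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            parts = line.rstrip("\n").split("\t")
            if len(parts) == 3:
                _, hypo, hyper = parts
                fout.write(hypo + "\t" + hyper + "\n")

# taxo_to_edges("science_en.taxo", "../datasets/SemEval-2016/original/science.tree")  # hypothetical names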

What is the training preprocessing data for WordNet only / WordNet + SemEval

I am confused about what to use for the SemEval setting.
For WordNet only, I used valname = dev_wnbo_hyper, model_prefix_file = 3in1_subseqFeat.
For WordNet + SemEval, I used valname = dev_twodatasets, model_prefix_file = twodatasets_subseqFeat.
Am I doing this right?
By the way, why is dev_twodatasets so big compared to dev_wnbo_hyper, and how do you generate it?

How to train with GPU

It seems that model_RL.py is set up for CPU training. How can I train with a GPU?
I commented out dyparams.set_cpu_mode() and added the following code before importing _dynet:

import dynet_config
# Declare GPU as the default device type
dynet_config.set_gpu()

But the training speed is still slow. Please correct me if I am doing something wrong.
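
Two general DyNet behaviours (not repo-specific) are easy to miss here: dynet_config.set_gpu() only takes effect if it runs before the first import of dynet/_dynet, and it only helps if the installed DyNet was built with CUDA support; a CPU-only build (the usual default) keeps running on the CPU regardless. Passing --dynet-gpu on the command line is an alternative way to request the GPU. A minimal ordering sketch:

import dynet_config
dynet_config.set_gpu()   # no effect unless DyNet was built with CUDA
import dynet as dy       # the backend (GPU or CPU) is fixed at this import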

corpus

Hello, morningmoni, do the datasets in corpus need to be created by ourselves, or can you release these datasets? Thank you very much.
