Source code and data for WWW'17 paper CoType: Joint Extraction of Typed Entities and Relations with Knowledge Bases.
Given a text corpus with entity mentions detected and heuristically labeled by distant supervision, this code determines the entity types for each entity mention and identifies relationships between entities together with their relation types.
An end-to-end tool (corpus to typed entities/relations) is under development; please stay tuned for updates.
Performance comparison with several distantly-supervised relation extraction systems over the KBP 2013 dataset.
Method | Precision | Recall | F1 |
---|---|---|---|
Mintz (our implementation, Mintz et al., 2009) | 0.296 | 0.387 | 0.335 |
LINE + Dist Sup (Tang et al., 2015) | 0.360 | 0.257 | 0.299 |
MultiR (Hoffmann et al., 2011) | 0.325 | 0.278 | 0.301 |
FCM + Dist Sup (Gormley et al., 2015) | 0.151 | 0.498 | 0.300 |
CoType (Ren et al., 2017) | 0.348 | 0.406 | 0.369 |
We take Ubuntu as an example.
- python 2.7
- Python library dependencies
$ pip install pexpect ujson tqdm
- Stanford CoreNLP 3.7.0 and its Python wrapper. Please put the library under `CoType/code/DataProcessor/`.
$ cd code/DataProcessor/
$ git clone [email protected]:stanfordnlp/stanza.git
$ cd stanza
$ pip install -e .
$ wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
$ unzip stanford-corenlp-full-2016-10-31.zip
- eigen 3.2.5 (already included).
We have processed (using our data pipeline) three public datasets into our JSON format. We ran Stanford NER on the training sets to detect entity mentions, and performed distant supervision using DBpedia Spotlight to assign type labels:
- BioInfer: 100k PubMed paper abstracts as training data and 1,530 manually labeled biomedical paper abstracts from BioInfer (Pyysalo et al., 2007) as test data. It consists of 94 relation types and over 2,000 entity types. (Download JSON)
- NYT (Riedel et al., 2011): 1.18M sentences sampled from 294K New York Times news articles. 395 sentences are manually annotated with 24 relation types and 47 entity types. (Download JSON)
- Wiki-KBP: the training corpus contains 1.5M sentences sampled from 780k Wikipedia articles (Ling & Weld, 2012) plus ~7,000 sentences from the 2013 KBP corpus. Test data consists of 14k manually labeled sentences from the 2013 KBP slot filling assessment results. It has 19 relation types and 126 entity types. (Download JSON)
Please put the data files in the corresponding subdirectories under `CoType/data/source/`.
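Before running the pipeline, it can save a failed run to confirm the data files landed in the right place. Below is a small sketch of such a check; the helper `missing_dataset_dirs` and the subdirectory names (`KBP`, `NYT`, `BioInfer`) are our assumptions for illustration — check `run.sh` for the names your checkout actually expects.

```python
import os

def missing_dataset_dirs(source_root, datasets=("KBP", "NYT", "BioInfer")):
    """Return the expected dataset directories absent under source_root.

    The directory names are an assumption; adjust them to match run.sh.
    """
    return [d for d in datasets if not os.path.isdir(os.path.join(source_root, d))]

if __name__ == "__main__":
    missing = missing_dataset_dirs("data/source")
    if missing:
        print("Missing dataset directories: " + ", ".join(missing))
```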
We have included compiled binaries. If you need to re-compile `retype.cpp` under your own g++ environment:
$ cd CoType/code/Model/retype; make
Run CoType for the task of relation extraction on the Wiki-KBP dataset:
$ java -mx4g -cp "code/DataProcessor/stanford-corenlp-full-2016-10-31/*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer
$ ./run.sh
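If `run.sh` fails because it cannot reach CoreNLP, a quick liveness check helps: the StanfordCoreNLPServer started above listens on port 9000 by default. The `server_is_up` helper below is our own sketch, not part of the CoType code.

```python
def server_is_up(url="http://localhost:9000/", timeout=2):
    """Return True if an HTTP GET to the CoreNLP server URL succeeds."""
    try:
        from urllib2 import urlopen          # Python 2.7, as required above
    except ImportError:
        from urllib.request import urlopen   # fallback for Python 3
    try:
        urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

if __name__ == "__main__":
    print("CoreNLP server reachable: %s" % server_is_up())
```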
Set the dataset to run on in `run.sh`, e.g. `Data="KBP"`.
- Parameters for learning CoType embeddings:
- KBP: -negative 3 -iters 400 -lr 0.02 -transWeight 1.0
- NYT: -negative 5 -iters 700 -lr 0.02 -transWeight 7.0
- BioInfer: -negative 5 -iters 700 -lr 0.02 -transWeight 7.0
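For reference, the per-dataset hyperparameters above can be collected in one small table; the flag names (`-negative`, `-iters`, `-lr`, `-transWeight`) are those listed above, while the `EMBED_PARAMS` dict and `embed_args` helper are our own illustration of how `run.sh` might assemble them.

```python
# Per-dataset embedding hyperparameters, copied from the list above.
EMBED_PARAMS = {
    "KBP":      {"negative": 3, "iters": 400, "lr": 0.02, "transWeight": 1.0},
    "NYT":      {"negative": 5, "iters": 700, "lr": 0.02, "transWeight": 7.0},
    "BioInfer": {"negative": 5, "iters": 700, "lr": 0.02, "transWeight": 7.0},
}

def embed_args(dataset):
    """Build the flag list for one dataset (illustrative helper)."""
    p = EMBED_PARAMS[dataset]
    return ["-negative", str(p["negative"]), "-iters", str(p["iters"]),
            "-lr", str(p["lr"]), "-transWeight", str(p["transWeight"])]
```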
After learning the embedding vectors, the following scripts evaluate relation extraction performance (precision, recall, F1):
$ python code/Evaluation/emb_test.py extract KBP retype cosine 0.0
$ python code/Evaluation/tune_threshold.py extract KBP emb retype cosine
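The `cosine 0.0` arguments above select cosine similarity as the scoring measure with a prediction threshold of 0.0. A minimal sketch of that decision rule follows; `predict_relation` and its `None` fallback are our simplification for illustration, not the actual API of `emb_test.py`.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def predict_relation(pair_vec, type_vecs, threshold=0.0):
    """Return the relation type whose embedding is most similar to the
    mention-pair embedding, or None if no score exceeds the threshold.
    Illustrative simplification of threshold-based extraction."""
    best_type, best_score = None, threshold
    for rel_type, vec in type_vecs.items():
        score = cosine(pair_vec, vec)
        if score > best_score:
            best_type, best_score = rel_type, score
    return best_type
```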
Please cite the following paper if you find the code and datasets useful:
@inproceedings{ren2017cotype,
author = {Ren, Xiang and Wu, Zeqiu and He, Wenqi and Qu, Meng and Voss, Clare R. and Ji, Heng and Abdelzaher, Tarek F. and Han, Jiawei},
title = {CoType: Joint Extraction of Typed Entities and Relations with Knowledge Bases},
booktitle = {Proceedings of the 26th International Conference on World Wide Web},
year = {2017},
pages = {1015--1024},
}