lhrlab / hahe

[ACL 2023] Official resources of "HAHE: Hierarchical Attention for Hyper-Relational Knowledge Graphs in Global and Local Level".

Home Page: https://doi.org/10.18653/v1/2023.acl-long.450

License: MIT License

Shell 1.02% Python 98.98%
global-local hierarchical-attention hyper-relational-knowledge-graph hypergraph-neural-networks link-prediction transformer

hahe's Introduction

HAHE

Official resources of "HAHE: Hierarchical Attention for Hyper-Relational Knowledge Graphs in Global and Local Level". Haoran Luo, Haihong E, Yuhao Yang, Yikai Guo, Mingzhi Sun, Tianyu Yao, Zichen Tang, Kaiyang Wan, Meina Song, Wei Lin. ACL 2023 [paper].

Overview

The global-level hypergraph-based representation and the local-level sequence-based representation, illustrated with three examples of H-Facts in HKGs:

An overview of the HAHE model for global-level and local-level representation of HKGs:

Introduction

This is the PyTorch implementation of HAHE, a novel Hierarchical Attention model for HKG Embedding, including global-level and local-level attention.
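As a rough, illustrative sketch (not the repository's actual implementation): the global level views each H-Fact as a hyperedge connecting its entities, and attention over the resulting bipartite entity-hyperedge graph can be written with GATv2Conv from torch-geometric, which the HAHE code imports. All sizes and variable names below are made up for illustration.

import torch
from torch_geometric.nn import GATv2Conv

num_entities, num_hyperedges, dim = 100, 40, 256   # toy sizes, not dataset values
entity_emb = torch.randn(num_entities, dim)
hyperedge_emb = torch.randn(num_hyperedges, dim)

# Incidence pairs (entity index, hyperedge index): which entities appear in
# which H-Facts. Random here; built from the dataset in practice.
ent_idx = torch.randint(0, num_entities, (300,))
hyp_idx = torch.randint(0, num_hyperedges, (300,))
edge_index = torch.stack([ent_idx, hyp_idx], dim=0)

# Entity -> hyperedge attention, then hyperedge -> entity attention
# ("dual attention"); heads=4 mirrors the --global_heads setting used below.
ent_to_edge = GATv2Conv((dim, dim), dim, heads=4, concat=False, add_self_loops=False)
edge_to_ent = GATv2Conv((dim, dim), dim, heads=4, concat=False, add_self_loops=False)

hyperedge_emb = ent_to_edge((entity_emb, hyperedge_emb), edge_index)
entity_emb = edge_to_ent((hyperedge_emb, entity_emb), edge_index.flip(0))
print(entity_emb.shape)   # torch.Size([100, 256])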

This repository contains the code and data, as well as the optimal configurations to reproduce the reported results.

Requirements and Installation

This project should work fine with the following environments:

  • Python 3.9.16 for data preprocessing, training and evaluation with:
    • torch 1.10.0
    • torch-scatter 2.0.9
    • torch-sparse 0.6.13
    • torch-cluster 1.6.0
    • torch-geometric 2.1.0.post1
    • numpy 1.23.3
  • GPU with CUDA 11.3

All experiments were conducted on a single NVIDIA GeForce GTX 1080 Ti (11 GB).

Setup with Conda

bash env.sh
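After env.sh finishes, a quick smoke test (illustrative only, not part of the repository) can confirm that the key dependencies import correctly and that the GPU is visible:

import torch
import torch_scatter
import torch_geometric
from torch_geometric.nn import GATv2Conv  # imported by the HAHE source

print("torch:", torch.__version__, "CUDA:", torch.version.cuda, "GPU:", torch.cuda.is_available())
print("torch-scatter:", torch_scatter.__version__)
print("torch-geometric:", torch_geometric.__version__)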

How to Run

Step 1. Download raw data

We consider three representative n-ary relational datasets, JF17K, WikiPeople, and WD50K, which can be downloaded from:

Step 2. Preprocess data

Then we convert the raw data into the required format for training and evaluation. The new data is organized into a directory named data, with a sub-directory for each dataset. In general, a sub-directory contains:

  • train.json: train set
  • valid.json: dev set
  • train+valid.json: train set + dev set
  • test.json: test set
  • all.json: combination of train/dev/test sets, used only for filtered evaluation
  • vocab.txt: vocabulary consisting of entities, relations, and special tokens like [MASK] and [PAD]

Note: JF17K is the only one that provides no dev set.
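A minimal sanity check for a preprocessed dataset directory is sketched below (illustrative only; it assumes vocab.txt holds one token per line, which also yields the value to pass later as --vocab_size):

import os

data_dir = "./data/jf17k"   # adjust to the dataset you preprocessed

with open(os.path.join(data_dir, "vocab.txt")) as f:
    vocab = [line.strip() for line in f if line.strip()]
print("vocab_size:", len(vocab))   # value for --vocab_size

for name in ("train.json", "valid.json", "train+valid.json", "test.json", "all.json"):
    path = os.path.join(data_dir, name)
    status = f"{os.path.getsize(path)} bytes" if os.path.exists(path) else "missing"
    print(name, "-", status)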

Step 3. Training & Evaluation

To train and evaluate the HAHE model, please run:

python -u ./src/run.py --name [TEST_NAME] --device [GPU_ID] --vocab_size [VOCAB_SIZE] --vocab_file [VOCAB_FILE] \
                       --train_file [TRAIN_FILE] --test_file [TEST_FILE] --ground_truth_file [GROUND_TRUTH_FILE] \
                       --num_workers [NUM_WORKERS] --num_relations [NUM_RELATIONS] \
                       --max_seq_len [MAX_SEQ_LEN] --max_arity [MAX_ARITY]

Before training, first create two directories to store HAHE's parameters and results respectively, then set the arguments according to the statistics of the chosen dataset:

  • [TEST_NAME]: a unique name identifying one training & evaluation run.
  • [GPU_ID]: the ID of the GPU you want to use.
  • [VOCAB_SIZE]: the vocabulary size of the dataset.
  • [VOCAB_FILE], [TRAIN_FILE], [TEST_FILE], [GROUND_TRUTH_FILE]: paths to the vocabulary file ("vocab.txt"), train file ("train.json"), test file ("test.json"), and ground-truth file ("all.json").
  • [NUM_WORKERS]: the number of workers used when reading the data.
  • [NUM_RELATIONS]: the number of relations in the dataset.
  • [MAX_ARITY]: the maximum arity of the n-ary facts in the dataset.
  • [MAX_SEQ_LEN]: the maximum length of an n-ary sequence, equal to (2 * [MAX_ARITY] - 1); see the snippet below.
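For reference, the relationship between [MAX_ARITY] and [MAX_SEQ_LEN] can be spelled out in a couple of lines; the values match the per-dataset commands below:

def max_seq_len(max_arity: int) -> int:
    # maximum length of an n-ary sequence, per the argument description above
    return 2 * max_arity - 1

print(max_seq_len(6))    # JF17K      -> 11
print(max_seq_len(7))    # WikiPeople -> 13
print(max_seq_len(10))   # WD50K      -> 19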

Please modify these hyperparameters according to your needs and the characteristics of different datasets.

For JF17K, to train and evaluate on this dataset using the default hyperparameters, please run:

python -u ./src/run.py --dataset "jf17k" --device "0" --vocab_size 29148 --vocab_file "./data/jf17k/vocab.txt" \
                       --train_file "./data/jf17k/train.json" --test_file "./data/jf17k/test.json" --ground_truth_file "./data/jf17k/all.json" \
                       --num_workers 1 --num_relations 501 --max_seq_len 11 --max_arity 6 --hidden_dim 256 \
                       --global_layers 2 --global_dropout 0.9 --global_activation "elu" --global_heads 4 \
                       --local_layers 12 --local_dropout 0.35 --local_heads 4 --decoder_activation "gelu" \
                       --batch_size 1024 --lr 5e-4 --weight_decay 0.002 --entity_soft 0.9 --relation_soft 0.9 \
                       --hyperedge_dropout 0.85 --epoch 300 --warmup_proportion 0.05

For WikiPeople, to train and evaluate on this dataset using the default hyperparameters, please run:

python -u ./src/run.py --dataset "wikipeople" --device "0" --vocab_size 35005 --vocab_file "./data/wikipeople/vocab.txt" \
                       --train_file "./data/wikipeople/train+valid.json" --test_file "./data/wikipeople/test.json" --ground_truth_file "./data/wikipeople/all.json" \
                       --num_workers 1 --num_relations 178 --max_seq_len 13 --max_arity 7 --hidden_dim 256 \
                       --global_layers 2 --global_dropout 0.1 --global_activation "elu" --global_heads 4 \
                       --local_layers 12 --local_dropout 0.1 --local_heads 4 --decoder_activation "gelu" \
                       --batch_size 1024 --lr 5e-4 --weight_decay 0.01 --entity_soft 0.2 --relation_soft 0.1 \
                       --hyperedge_dropout 0.99 --epoch 300 --warmup_proportion 0.1

For WD50K, to train and evaluate on this dataset using the default hyperparameters, please run:

python -u ./src/run.py --dataset "wd50k" --device "0" --vocab_size 47688 --vocab_file "./data/wd50k/vocab.txt" \
                       --train_file "./data/wd50k/train+valid.json" --test_file "./data/wd50k/test.json" --ground_truth_file "./data/wd50k/all.json" \
                       --num_workers 1 --num_relations 531 --max_seq_len 19 --max_arity 10 --hidden_dim 256 \
                       --global_layers 2 --global_dropout 0.1 --global_activation "elu" --global_heads 4 \
                       --local_layers 12 --local_dropout 0.1 --local_heads 4 --decoder_activation "gelu" \
                       --batch_size 512 --lr 5e-4 --weight_decay 0.01 --entity_soft 0.2 --relation_soft 0.1 \
                       --hyperedge_dropout 0.8 --epoch 300 --warmup_proportion 0.1
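For convenience, the dataset-specific arguments from the three commands above can be collected in one place; this dictionary is just a summary for reference, not part of the repository (all other arguments follow the commands above):

DATASETS = {
    "jf17k":      dict(vocab_size=29148, num_relations=501, max_arity=6,  max_seq_len=11, batch_size=1024, train_file="train.json"),
    "wikipeople": dict(vocab_size=35005, num_relations=178, max_arity=7,  max_seq_len=13, batch_size=1024, train_file="train+valid.json"),
    "wd50k":      dict(vocab_size=47688, num_relations=531, max_arity=10, max_seq_len=19, batch_size=512,  train_file="train+valid.json"),
}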

BibTex

If you find this work helpful for your research, please cite:

@inproceedings{luo2023hahe,
    title = "{HAHE}: Hierarchical Attention for Hyper-Relational Knowledge Graphs in Global and Local Level",
    author = "Luo, Haoran  and
      E, Haihong  and
      Yang, Yuhao  and
      Guo, Yikai  and
      Sun, Mingzhi  and
      Yao, Tianyu  and
      Tang, Zichen  and
      Wan, Kaiyang  and
      Song, Meina  and
      Lin, Wei",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.450",
    doi = "10.18653/v1/2023.acl-long.450",
    pages = "8095--8107",
    abstract = "Link Prediction on Hyper-relational Knowledge Graphs (HKG) is a worthwhile endeavor. HKG consists of hyper-relational facts (H-Facts), composed of a main triple and several auxiliary attribute-value qualifiers, which can effectively represent factually comprehensive information. The internal structure of HKG can be represented as a hypergraph-based representation globally and a semantic sequence-based representation locally. However, existing research seldom simultaneously models the graphical and sequential structure of HKGs, limiting HKGs{'} representation. To overcome this limitation, we propose a novel Hierarchical Attention model for HKG Embedding (HAHE), including global-level and local-level attention. The global-level attention can model the graphical structure of HKG using hypergraph dual-attention layers, while the local-level attention can learn the sequential structure inside H-Facts via heterogeneous self-attention layers. Experiment results indicate that HAHE achieves state-of-the-art performance in link prediction tasks on HKG standard datasets. In addition, HAHE addresses the issue of HKG multi-position prediction for the first time, increasing the applicability of the HKG link prediction task. Our code is publicly available.",
}

For further questions, please contact: [email protected].

hahe's People

Contributors

lhrlab

hahe's Issues

hyperedge_dropout

Hello, hyperedge_dropout keeps only a small fraction of hyperedges for global information encoding. Why was hyperedge_dropout designed this way?

Cannot understand the code

edge_labels.append([0, 1, 2] + [3, 4] * max_aux)
edge_labels.append([1, 0, 5] + [6, 7] * max_aux)
edge_labels.append([2, 5, 0] + [8, 9] * max_aux)
for idx in range(max_aux):
    edge_labels.append([3, 6, 8] + [11, 12] * idx + [0, 10] + [11, 12] * (max_aux - idx - 1))
    edge_labels.append([4, 7, 9] + [12, 13] * idx + [10, 0] + [12, 13] * (max_aux - idx - 1))

I cannot understand how the edge_labels are computed or what they mean.

Unable to reproduce results on JF17K

Hello! I used the default parameters in Readme.md to run experiments on JF17K, but the final entity MRR falls short of the result reported in your paper. Could you please tell me the reason?
The parameters are: (screenshot)
The results are: (screenshot)

About the results of JF17K dataset

Would you mind providing the running script for JF17K? With the default parameters you provided, I can only get an MRR of about 10%.

export LOG_PATH=./logs/jf17k.out
export CUDA=4
export VOCAB_SIZE=29148
export DATASET=jf17k
export VOCAB_FILE=./data/jf17k/vocab.txt
export TRAIN_FILE=./data/jf17k/train.json
export TEST_FILE=./data/jf17k/test.json
export GROUND_TRUTH_FILE=./data/jf17k/all.json
export NUM_RELATIONS=321


nohup python -u ./src/run.py --name "TEST" --dataset $DATASET --device $CUDA --vocab_size $VOCAB_SIZE --vocab_file $VOCAB_FILE \
--train_file $TRAIN_FILE --test_file $TEST_FILE --ground_truth_file $GROUND_TRUTH_FILE --num_workers 1 --num_relations $NUM_RELATIONS \
> $LOG_PATH 2>&1 &

The corresponding log file config:

05/14/2023 22:50:42  -----------  Configuration Arguments -----------
05/14/2023 22:50:42  batch_size: 512
05/14/2023 22:50:42  ckpt_save_dir: ckpts
05/14/2023 22:50:42  dataset: jf17k
05/14/2023 22:50:42  decoder_activation: gelu
05/14/2023 22:50:42  device: 4
05/14/2023 22:50:42  entity_soft: 0.8
05/14/2023 22:50:42  epoch: 100
05/14/2023 22:50:42  global_activation: elu
05/14/2023 22:50:42  global_dropout: 0.1
05/14/2023 22:50:42  global_heads: 4
05/14/2023 22:50:42  global_layers: 2
05/14/2023 22:50:42  ground_truth_file: ./data/jf17k/all.json
05/14/2023 22:50:42  hidden_dim: 256
05/14/2023 22:50:42  hyperedge_dropout: 0.0
05/14/2023 22:50:42  local_dropout: 0.1
05/14/2023 22:50:42  local_heads: 4
05/14/2023 22:50:42  local_layers: 12
05/14/2023 22:50:42  lr: 0.0005
05/14/2023 22:50:42  max_arity: 32
05/14/2023 22:50:42  max_seq_len: 63
05/14/2023 22:50:42  name: TEST
05/14/2023 22:50:42  num_entities: 28825
05/14/2023 22:50:42  num_relations: 321
05/14/2023 22:50:42  num_workers: 1
05/14/2023 22:50:42  relation_soft: 0.9
05/14/2023 22:50:42  remove_mask: False
05/14/2023 22:50:42  test_file: ./data/jf17k/test.json
05/14/2023 22:50:42  train_file: ./data/jf17k/train.json
05/14/2023 22:50:42  use_edge: True
05/14/2023 22:50:42  use_global: True
05/14/2023 22:50:42  use_node: False
05/14/2023 22:50:42  vocab_file: ./data/jf17k/vocab.txt
05/14/2023 22:50:42  vocab_size: 29148
05/14/2023 22:50:42  warmup_proportion: 0.1
05/14/2023 22:50:42  weight_decay: 0.01

Segmentation fault (core dumped)

Hi, I followed env.sh to install the environment and ran the WD50K example, but it ends with "Segmentation fault (core dumped)". Can you help me? It seems that "from torch_geometric.nn import GATv2Conv" is the problematic line. Thanks.
