
GraphQ IR: Unified Intermediate Representation for the Semantic Parsing of Graph Query Languages

This repository contains source code for the EMNLP 2022 Long Paper "GraphQ IR: Unifying the Semantic Parsing of Graph Query Languages with One Intermediate Representation".


Setup

General Setup

All required packages and their versions are listed in the environment configuration file environment.yml. You can build an identical conda environment directly:

conda env create -f environment.yml
conda activate graphqir

For the setup and usage of our source-to-source compiler, please refer to GraphQ Trans.

Model Setup

In our experiments, we used the pretrained BART-base model to implement the neural semantic parser. To reproduce our results, you may download the model checkpoint here.
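
Alternatively, the snippet below is a minimal sketch (assuming the Hugging Face transformers package is available in the graphqir environment) for pulling the public facebook/bart-base weights into ./bart-base/, the path the commands below expect; it may not be byte-identical to the linked checkpoint.

# Minimal sketch, assuming the Hugging Face `transformers` package is installed.
# Downloads the public facebook/bart-base weights and stores them under
# ./bart-base/, the path used by the preprocessing/training commands below.
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

tokenizer.save_pretrained("./bart-base/")
model.save_pretrained("./bart-base/")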

Dataset-Specific Setup

For KQA Pro, you may follow their documentation to set up the database backend. After starting the service, replace the URL in data/kqapro/utils/sparql_engine.py with your own.

For GrailQA, you may follow Freebase Setup to set up the database backend. After starting the service, replace the URL in data/grailqa/utils/sparql_executer.py with your own.
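
Both of these utilities issue queries against the SPARQL endpoint you configure. As an optional sanity check that the service is reachable, the sketch below (using the SPARQLWrapper package; the endpoint URL is a placeholder for your own) sends a trivial query:

# Optional sanity-check sketch; the endpoint URL below is a placeholder for
# your own service (the same URL you put into data/kqapro/utils/sparql_engine.py
# or data/grailqa/utils/sparql_executer.py).
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://localhost:8890/sparql")  # placeholder URL
endpoint.setQuery("SELECT * WHERE { ?s ?p ?o } LIMIT 1")
endpoint.setReturnFormat(JSON)
print(endpoint.query().convert())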

Dataset

Experiments are conducted on four semantic parsing benchmarks: KQA Pro, Overnight, GrailQA, and MetaQA-Cypher.

KQA Pro

This dataset contains the parallel data of natural language questions and the corresponding logical forms in SPARQL and KoPL. It can be downloaded via the official website as provided by Cao et al. (2022).

Overnight

This dataset contains the parallel data of natural language questions and the corresponding logical forms in Lambda-DCS in 8 sub-domains as prepared by Wang et al. (2015). The data and the evaluator can be accessed here as provided by Cao et al. (2019).

To pull the dependencies for running the Overnight experiments, please run:

./pull_dependency_overnight.sh

GrailQA

This dataset contains the parallel data of natural language questions and the corresponding logical forms in SPARQL. It can be downloaded via the official website as provided by Gu et al. (2020). To focus on the sole task of semantic parsing, we replace the entity IDs (e.g., m.06mn7) with their respective names (e.g., Stanley Kubrick) in the logical forms, thus eliminating the need for an explicit entity linking module.

Please note that this replacement can lead to execution results that are not equivalent to those of the original annotations, so the performance reported in our paper may not be directly comparable to other works.
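
For illustration only, the replacement boils down to a string substitution over the logical forms; the sketch below is a hypothetical example (the mid_to_name mapping and the regex are assumptions, not the exact script we used):

import re

# Hypothetical illustration: substitute Freebase MIDs with entity names.
# `mid_to_name` would be populated from the dataset's entity annotations.
mid_to_name = {"m.06mn7": "Stanley Kubrick"}

def replace_entity_ids(logical_form: str) -> str:
    return re.sub(r"\bm\.[0-9a-z_]+",
                  lambda match: mid_to_name.get(match.group(0), match.group(0)),
                  logical_form)

print(replace_entity_ids("(JOIN film.director.film m.06mn7)"))
# -> (JOIN film.director.film Stanley Kubrick)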

To pull the dependencies for running the GrailQA experiments, please run:

./pull_dependency_grailqa.sh

MetaQA-Cypher

This dataset contains the parallel data of natural language questions and the corresponding logical forms in Cypher. The original data is available here as prepared by Zhang et al. (2017). We used a rule-based method to create its Cypher annotations for low-resource evaluation.
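
As a rough illustration of what such a rule-based annotation looks like (the relation and label names below are assumptions, not the exact rules or schema we used), a 1-hop question can be keyed on its relation and filled with the tagged entity:

# Hypothetical illustration of template-based Cypher annotation for a 1-hop
# MetaQA question; the relation/label names are assumptions, not the exact
# schema used in our annotation pipeline.
TEMPLATES = {
    "directed_by": "MATCH (m:Movie)-[:DIRECTED_BY]->(p:Person {{name: '{e}'}}) RETURN m.name",
}

def annotate(entity: str, relation: str) -> str:
    return TEMPLATES[relation].format(e=entity)

# e.g. "what movies did [Stanley Kubrick] direct"
print(annotate("Stanley Kubrick", "directed_by"))
# MATCH (m:Movie)-[:DIRECTED_BY]->(p:Person {name: 'Stanley Kubrick'}) RETURN m.name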

To pull the dependencies for running the MetaQA-Cypher experiments, please run:

./pull_dependency_metaqa.sh

Throughout the experiments, we suggest structuring the files as follows:

GraphQ_IR/
├── data/
│   ├── kqapro/
│   │   ├── data/
│   │   │   ├── kb.json
│   │   │   ├── train.json
│   │   │   ├── val.json
│   │   │   └── test.json
│   │   ├── utils/
│   │   ├── config_kopl.py
│   │   ├── config_sparql.py
│   │   └── evaluate.py
│   ├── overnight/
│   │   ├── data/
│   │   │   ├── *_train.tsv
│   │   │   └── *_test.tsv
│   │   ├── evaluator/
│   │   └── config.py
│   ├── grailqa/
│   │   ├── data/
│   │   │   ├── ontology/
│   │   │   ├── train.json
│   │   │   ├── val.json
│   │   │   └── test.json
│   │   ├── utils/
│   │   └── config.py
│   └── metaqa/
│       ├── data/
│       │   └── *shot/
│       │       ├── train.json
│       │       ├── val.json
│       │       └── test.json
│       └── config.py
├── bart-base/
├── utils/
├── preprocess.py
├── train.py
├── inference.py
├── corrector.py
├── cfq_ir.py
└── module-classes.txt
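
To quickly verify that the files are in place before preprocessing, the optional sketch below (not part of the released scripts; the paths mirror the KQA Pro layout above) checks the expected locations:

import os

# Optional sanity check (not part of the released scripts): confirm that the
# KQA Pro files and the pretrained model are where the commands below expect them.
expected = [
    "data/kqapro/data/kb.json",
    "data/kqapro/data/train.json",
    "data/kqapro/data/val.json",
    "data/kqapro/data/test.json",
    "data/kqapro/config_sparql.py",
    "bart-base",
]
for path in expected:
    status = "ok" if os.path.exists(path) else "MISSING"
    print(f"{status:8s} {path}")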

Experiments

For simplicity, we take the NL-to-SPARQL semantic parsing task on the KQA Pro dataset as an example.

For other datasets or different target languages, you may simply modify the arguments --input_dir, --output_dir, and --config accordingly.

To run the BART baseline experiments, remove the --ir_mode argument.

To run the experiments with CFQ IR (Herzig et al., 2021), set --ir_mode to cfq. Please note that CFQ IR is only applicable to datasets with SPARQL as the target query language (i.e., KQA Pro and GrailQA).

Preprocessing

python -m preprocess \
--input_dir ./data/kqapro/data/ \ # path to raw data
--output_dir ./exp_files/kqapro/ \ # path for saving preprocessed data
--model_name_or_path ./bart-base/ \ # path to pretrained model
--config ./data/kqapro/config_sparql.py \ # path to data-specific configuration file
--ir_mode graphq # use "cfq" for CFQ IR; remove this argument for the baseline

Training

python -m torch.distributed.launch --nproc_per_node=8 -m train \
--input_dir ./exp_files/kqapro/ \ # path to preprocessed data
--output_dir ./exp_results/kqapro/ \ # path for saving experiment logs & checkpoints
--model_name_or_path ./bart-base/ \ # path to pretrained model
--config ./data/kqapro/config_sparql.py \ # path to data-specific configuration file
--batch_size 128 \ # 128 for KQA Pro; 64 for Overnight, GrailQA, and MetaQA-Cypher
--ir_mode graphq # use "cfq" for CFQ IR; remove this argument for the baseline

Inference

python -m inference \
--input_dir ./exp_files/kqapro/ \ # path to preprocessed data
--output_dir ./exp_results/kqapro/ \ # path for saving inference results
--model_name_or_path ./bart-base/ \ # path to pretrained model
--ckpt ./exp_results/kqapro/checkpoint-best/ \ # path to saved checkpoint
--config ./data/kqapro/config_sparql.py \ # path to data-specific configuration file
--ir_mode graphq # use "cfq" for CFQ IR; remove this argument for the baseline

Citation

If you find our work helpful, please cite it as follows:

@article{nie2022graphq,
  title={GraphQ IR: Unifying Semantic Parsing of Graph Query Language with Intermediate Representation},
  author={Nie, Lunyiu and Cao, Shulin and Shi, Jiaxin and Tian, Qi and Hou, Lei and Li, Juanzi and Zhai, Jidong},
  journal={arXiv preprint arXiv:2205.12078},
  year={2022}
}
