This is the official code for Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index, to appear at ACL 2019. Check out our Live Demo.
BibTeX:
```
@inproceedings{denspi,
  title={Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index},
  author={Seo, Minjoon and Lee, Jinhyuk and Kwiatkowski, Tom and Parikh, Ankur P and Farhadi, Ali and Hajishirzi, Hannaneh},
  booktitle={ACL},
  year={2019}
}
```
While the entire codebase is here, please understand that it still requires substantial work on documentation. As of now, we only have instructions for hosting your own demo with the pre-dumped index and pre-trained model that we provide. Please stay tuned for the full documentation including how to start from scratch (though you are more than welcome to look into our undocumented code).
This section will let you host the demo on your machine. You can also try it out here. You will need to download ~1.5 TB of files, but once you have them, it will take less than a minute to start serving.
- CPUs: at least 4 cores recommended.
- RAM: at least 32 GB needed.
- Storage: at least 2 TB of SSD needed.
- GPUs: not needed.
If you are using Google Cloud (our demo is also hosted on Google Cloud, with 24 vCPUs, 128 GB RAM, and 6 local SSDs), we highly recommend using local SSDs, which are not only cheaper but also better for low-latency applications (at the cost of persistence).
We highly recommend using a Conda environment, since `faiss` cannot be installed with pip.
Note that we have two `requirements.txt` files: one in this directory, and one in the `open` subfolder. This directory's file is for hosting a (PyTorch-based) server that maps the input question to a vector. `open`'s file is for hosting the search server and the demo itself. In this tutorial, we will simply install both in the same environment.
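To see how the two servers fit together, here is a hedged toy sketch (not the actual DenSPI code): one server embeds the question into a vector, and the other finds the phrase whose vector has the largest inner product with it. The real system runs faiss over a 1.5 TB pre-computed phrase index; the phrases and vectors below are made up purely for illustration.

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

# Pretend phrase index: phrase -> dense vector (illustrative values only).
phrase_index = {
    "Barack Obama":     [0.9, 0.1, 0.0],
    "the Eiffel Tower": [0.1, 0.8, 0.2],
    "photosynthesis":   [0.0, 0.2, 0.9],
}

def answer(question_vector):
    """Return the phrase with the maximum inner-product score."""
    return max(phrase_index, key=lambda p: dot(phrase_index[p], question_vector))

print(answer([1.0, 0.0, 0.1]))  # prints "Barack Obama"
```

In the real demo, the `max` over a dictionary is replaced by approximate maximum inner product search with faiss, which is what makes querying billions of phrase vectors feasible in real time.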
- Make sure you are using `python=3.6` through Conda.
- First, manually install `faiss` with `conda`:
  ```
  conda install faiss-cpu=1.5.2 -c pytorch
  ```
- Before installing with pip, make sure that you have installed DrQA. Visit here for instructions.
- Then install both requirements files:
  ```
  pip install -r requirements.txt
  pip install -r open/requirements.txt
  ```
Note that this will give you an error if you don't have `faiss` and DrQA already installed.
Model and dump files are currently provided through Google Cloud Storage under the bucket `denspi`, so first make sure that you have installed `gsutil` (link). You will then need to download four directories.
- Create `$ROOT_DIR` and cd to it:
  ```
  mkdir $ROOT_DIR; cd $ROOT_DIR
  ```
- You will need the model files:
  ```
  gsutil cp -r gs://denspi/v1-0/model .
  ```
- You will need BERT-related files:
  ```
  gsutil cp -r gs://denspi/v1-0/bert .
  ```
- You will need tfidf-related information from DrQA:
  ```
  gsutil cp -r gs://denspi/v1-0/wikipedia .
  ```
- You will need to download the entire phrase index dump. Warning: this will take up 1.5 TB!
  ```
  gsutil cp -r gs://denspi/v1-0/dump .
  ```
You can also choose to download all four at once via
```
gsutil cp -r gs://denspi/v1-0 $ROOT_DIR
```
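Before kicking off the ~1.5 TB download, it can save time to confirm the target disk actually has room. A minimal sketch using only the Python standard library (the 2 TB threshold mirrors the storage requirement above; adjust the path to your `$ROOT_DIR`):

```python
import shutil

def enough_space(path=".", required_bytes=2 * 10**12):
    """Return True if path's filesystem has at least required_bytes free."""
    return shutil.disk_usage(path).free >= required_bytes

# Example: check the current directory before running gsutil.
if enough_space("."):
    print("Enough space for the dump.")
else:
    print("Less than 2 TB free; the dump will not fit here.")
```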
Serve the API on port `$API_PORT`:
```
python run_piqa.py --do_serve --load_dir $ROOT_DIR/model --metadata_dir $ROOT_DIR/bert --do_load --parallel --port $API_PORT
```
This lets you perform a GET request on `$API_PORT` to obtain the embedding of the question in JSON (list) format.
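A client can decode that response as in the hedged sketch below. The query-parameter name and any response structure beyond "a JSON list" are assumptions (they are not documented here), so the live request is left commented out and only the decoding step runs:

```python
import json

def parse_embedding(body):
    """Decode the server's JSON-list response into a list of floats."""
    vec = json.loads(body)
    if not isinstance(vec, list):
        raise ValueError("expected a JSON list")
    return [float(x) for x in vec]

# Live call (hypothetical; the parameter name "query" is a guess):
# from urllib.request import urlopen
# with urlopen(f"http://localhost:{API_PORT}/?query=Who+wrote+Hamlet") as r:
#     embedding = parse_embedding(r.read().decode())

# Offline demonstration with a made-up response body:
print(parse_embedding("[0.12, -0.43, 0.88]"))  # prints [0.12, -0.43, 0.88]
```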
Serve the demo on `$DEMO_PORT`:
```
cd open/
python run_demo.py $ROOT_DIR/dump $ROOT_DIR/wikipedia --api_port $API_PORT --port $DEMO_PORT
```
The demo will be served in about a minute.
Our code makes heavy use of faiss, DrQA, and BERT, in particular Hugging Face's PyTorch implementation of BERT. We thank the authors for open-sourcing these projects!