
InPars

Inquisitive Parrots for Search
A toolkit for end-to-end synthetic data generation using LLMs for IR

Installation

Use the pip package manager to install the InPars toolkit.

pip install inpars

Usage

To generate data for one of the BEIR datasets, you can use the following command:

python -m inpars.generate \
        --prompt="inpars" \
        --dataset="trec-covid" \
        --dataset_source="ir_datasets" \
        --base_model="EleutherAI/gpt-j-6B" \
        --output="trec-covid-queries.jsonl" 

Additionally, you can use your own custom dataset by pointing the corpus and queries arguments to local files.
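
For example, a generation run over a local corpus might look like the sketch below (the --corpus flag name and the JSONL file layout are assumptions based on the description above; check python -m inpars.generate --help for the exact interface):

python -m inpars.generate \
        --prompt="inpars" \
        --corpus="path/to/corpus.jsonl" \
        --base_model="EleutherAI/gpt-j-6B" \
        --output="custom-queries.jsonl"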

The generated queries can be noisy, so a filtering step is highly recommended:

python -m inpars.filter \
        --input="trec-covid-queries.jsonl" \
        --dataset="trec-covid" \
        --filter_strategy="scores" \
        --keep_top_k="10_000" \
        --output="trec-covid-queries-filtered.jsonl"

There are currently two filtering strategies available: scores, which uses probability scores from the LLM itself, and reranker, which uses an auxiliary reranker to filter queries as introduced by InPars-v2.
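
For instance, switching to the reranker strategy changes only the --filter_strategy flag (a sketch; which reranker model is used by default, and how to override it, is not specified here):

python -m inpars.filter \
        --input="trec-covid-queries.jsonl" \
        --dataset="trec-covid" \
        --filter_strategy="reranker" \
        --keep_top_k="10_000" \
        --output="trec-covid-queries-filtered.jsonl"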

To prepare the training file, negative examples are mined by retrieving candidate documents with BM25 using the generated queries and sampling from these candidates. This is done using the following command:

python -m inpars.generate_triples \
        --input="trec-covid-queries-filtered.jsonl" \
        --dataset="trec-covid" \
        --output="trec-covid-triples.tsv"

With the generated triples file, you can train the model using the following command:

python -m inpars.train \
        --triples="trec-covid-triples.tsv" \
        --base_model="castorini/monot5-3b-msmarco-10k" \
        --output_dir="./reranker/" \
        --max_steps="156"

You can choose different base models, hyperparameters, and training strategies supported by the HuggingFace Trainer.
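
For example, the sketch below passes common HuggingFace TrainingArguments alongside the required flags (whether inpars.train forwards every Trainer argument is an assumption; unsupported flags would be rejected at startup):

python -m inpars.train \
        --triples="trec-covid-triples.tsv" \
        --base_model="castorini/monot5-3b-msmarco-10k" \
        --output_dir="./reranker/" \
        --max_steps="156" \
        --learning_rate="3e-4" \
        --per_device_train_batch_size="8" \
        --gradient_accumulation_steps="4"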

After fine-tuning the reranker, you can rerank prebuilt runs from the BEIR benchmark using the following command, or rerank a custom run (see the sketch after this command):

python -m inpars.rerank \
        --model="./reranker/" \
        --dataset="trec-covid" \
        --output_run="trec-covid-run.txt"
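
To rerank a custom run, a sketch assuming an --input_run flag that accepts a standard TREC run file, plus the same corpus and queries arguments used for custom datasets above (these flag names are assumptions; check python -m inpars.rerank --help):

python -m inpars.rerank \
        --model="./reranker/" \
        --input_run="path/to/custom-run.txt" \
        --corpus="path/to/corpus.jsonl" \
        --queries="path/to/queries.jsonl" \
        --output_run="custom-run-reranked.txt"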

Finally, you can evaluate the reranked run using the following command:

python -m inpars.evaluate \
        --dataset="trec-covid" \
        --run="trec-covid-run.txt"

Resources

Generated datasets

InPars-v1

Download the synthetic datasets generated by InPars-v1 using the links below. Each dataset contains 100k synthetic queries, each paired with the document/passage that originated the query. To use them for training, you still need to filter the top 10k query-document pairs by score using the inpars.filter command explained above.
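
For example, filtering a downloaded InPars-v1 file down to its top 10k pairs (the input filename is hypothetical; substitute the file you downloaded):

python -m inpars.filter \
        --input="inpars-v1-trec-covid-queries.jsonl" \
        --dataset="trec-covid" \
        --filter_strategy="scores" \
        --keep_top_k="10_000" \
        --output="trec-covid-queries-filtered.jsonl"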

InPars-v2

Download the synthetic datasets generated by InPars-v2 on HuggingFace Hub.

Each dataset contains 10k <synthetic query, document> pairs already filtered by monoT5-3B; that is, we select the top 10k pairs, according to monoT5-3B, from the 100k examples generated by InPars-v1. You can then use these 10k examples as positive query-document pairs to train retrievers using the inpars.train command explained above. Remember that you still need to generate negative examples with the inpars.generate_triples command explained above. Find more details about the training process in the InPars-v2 paper.
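
For example, mining negatives for a downloaded InPars-v2 file (the input filename is hypothetical; substitute the file you downloaded from the Hub):

python -m inpars.generate_triples \
        --input="inpars-v2-trec-covid-queries-filtered.jsonl" \
        --dataset="trec-covid" \
        --output="trec-covid-triples.tsv"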

Finetuned models

Download finetuned models from InPars-v2 on HuggingFace Hub.
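
These checkpoints plug directly into the reranking step in place of a locally trained model. A sketch, assuming the Hub identifiers follow a zeta-alpha-ai/monot5-3b-inpars-v2-{dataset} naming pattern (verify the exact model name on the Hub before use):

python -m inpars.rerank \
        --model="zeta-alpha-ai/monot5-3b-inpars-v2-trec-covid" \
        --dataset="trec-covid" \
        --output_run="trec-covid-run.txt"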

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

References

If you use this tool, you can cite the original InPars paper (published at SIGIR), InPars-v2, or the InPars Toolkit paper.

InPars-v1:

@inproceedings{inpars,
  author = {Bonifacio, Luiz and Abonizio, Hugo and Fadaee, Marzieh and Nogueira, Rodrigo},
  title = {{InPars}: Unsupervised Dataset Generation for Information Retrieval},
  year = {2022},
  isbn = {9781450387323},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3477495.3531863},
  doi = {10.1145/3477495.3531863},
  booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  pages = {2387–2392},
  numpages = {6},
  keywords = {generative models, large language models, question generation, synthetic datasets, few-shot models, multi-stage ranking},
  location = {Madrid, Spain},
  series = {SIGIR '22}
}

InPars-v2:

@misc{inparsv2,
  doi = {10.48550/ARXIV.2301.01820},
  url = {https://arxiv.org/abs/2301.01820},
  author = {Jeronymo, Vitor and Bonifacio, Luiz and Abonizio, Hugo and Fadaee, Marzieh and Lotufo, Roberto and Zavrel, Jakub and Nogueira, Rodrigo},
  title = {{InPars-v2}: Large Language Models as Efficient Dataset Generators for Information Retrieval},
  publisher = {arXiv},
  year = {2023},
  copyright = {Creative Commons Attribution 4.0 International}
}

InPars Toolkit:

@misc{abonizio2023inpars,
  title = {{InPars} Toolkit: A Unified and Reproducible Synthetic Data Generation Pipeline for Neural Information Retrieval},
  author = {Abonizio, Hugo and Bonifacio, Luiz and Jeronymo, Vitor and Lotufo, Roberto and Zavrel, Jakub and Nogueira, Rodrigo},
  year = {2023},
  eprint = {2307.04601},
  archivePrefix = {arXiv},
  primaryClass = {cs.IR}
}
