Comments (3)
I've uploaded the script to run baseline LMs!
https://github.com/AkariAsai/self-rag/blob/main/retrieval_lm/run_baseline_lm.py
I'll add documentation for running the baselines, but essentially you just need to specify the model name and pass the same input file as in the Self-RAG pipeline. For the retrieval baselines, please use the --mode retrieval --prompt_name "prompt_no_input_retrieval" options to trigger retrieval.
e.g., Llama2-7b (pre-trained)
python run_baseline_refactor.py \
--model_name meta-llama/Llama-2-7b-hf \
--input_file INPUT_FILE_SAME_AS_SELF_RAG \
--max_new_tokens 100 --metric match \
--result_fp RESULT_FILE_PATH --task qa --mode retrieval --prompt_name "prompt_no_input_retrieval"
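For intuition, the --metric match option presumably scores a generation as correct if any gold answer string appears in it; a minimal sketch of such a metric (match_score is a hypothetical helper, not the repo's implementation):

```python
def match_score(prediction: str, golds: list[str]) -> int:
    """Return 1 if any gold answer occurs as a (case-insensitive)
    substring of the model's prediction, else 0."""
    pred = prediction.lower()
    return int(any(gold.lower() in pred for gold in golds))
```

Averaging this over the evaluation set gives an accuracy-style number comparable across baselines.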
e.g., ChatGPT (March snapshot)
python run_baseline_refactor.py \
--model_name gpt-3.5-turbo-0301 \
--input_file INPUT_FILE_SAME_AS_SELF_RAG \
--max_new_tokens 100 --metric match \
--result_fp RESULT_FILE_PATH \
--task qa \
--api_key YOUR_OPEN_AI_API_KEY_FILE \
--mode retrieval --prompt_name "prompt_no_input_retrieval"
For OpenAI API models, you also need to set the organization key here: https://github.com/AkariAsai/self-rag/blob/main/retrieval_lm/run_baseline_lm.py#L12
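For reference, the script hard-codes the organization at the linked line; an alternative sketch is to read the key file passed via --api_key and export the credentials through the environment variables the OpenAI SDK reads (load_openai_credentials is a hypothetical helper, not part of the repo):

```python
import os
from typing import Optional

def load_openai_credentials(key_path: str, org_id: Optional[str] = None) -> None:
    """Read an API key from a file and expose it (plus an optional
    organization id) via the environment variables the OpenAI SDK reads."""
    with open(key_path) as f:
        os.environ["OPENAI_API_KEY"] = f.read().strip()
    if org_id is not None:
        os.environ["OPENAI_ORGANIZATION"] = org_id
```

Keeping the key in a file rather than on the command line avoids leaking it into shell history.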
from self-rag.
I'll close this issue for now, but let me know if you have any further questions!
Thank you :)
Related Issues (20)
- Where does the retrieval done?
- Questions about Critic model
- Retrieval-augmented baselines - Huggingface models
- I have create a virtual enviroment in anaconda. However, something went wrong when i try to 'pip install -r requirement'
- 4 bit quantized version of 7B?
- How long does it takes to train an epoch for critic/generator model on llama-7B with 8 A100?
- What does YOUR_INPUT_FILE look like? Can you provide an example? Thanks very much!
- Explanation needed for [Continue to Use Evidence]
- How can I get initial input file for generator?
- model issues
- Processed Input Dataset and Flan-3B Critic Generated Dataset
- Reproducing Self-RAG
- accuracy metric
- About parameter `max_depth`
- Doesn't the generator need to call the retriever when training the model?
- The critic model will generate different type of token when I use run_reward_vllm.py to generate tokens
- some problem with run_long_form_static.py
- Data formatting to call the retriever
- Question Regarding Formula Error in Your Paper
- FactScore Inference Fails with KeyError: 'original_splitted_sentences'