Comments (14)
Thanks @lintangsutawika. @zaidalyafeai actually made HFDS variants of the tasks that include prior dialog turns as context. I think we can just use those as the base datasets.

---

There are a few points to note for some datasets:
- Natural Questions - The original eval script measures answer spans, while T0 generates free-form text. We need to agree on whether to map the generated text back to a span or, alternatively, to convert the ground-truth span into a text string. I think the latter would be better.
- CoQA - The evaluation set has 7,983 samples, consisting of 500 contexts with multiple questions each. Evaluation is done on all the questions, but the HF version given to the model only has 500 samples. We might need to reformat the HF dataset so that each sample consists of context + one question (see the flattening sketch after this list). Currently, all the questions for one context are stored as a list.
- QuAC - Same situation as CoQA.
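A minimal sketch of the flattening described above, assuming the HF `coqa` schema where each story carries parallel `questions` and `answers["input_text"]` lists:

```python
from datasets import load_dataset

def flatten_turns(batch):
    # Expand each story into one example per (question, answer) turn.
    out = {"story": [], "question": [], "answer": []}
    for story, questions, answers in zip(
        batch["story"], batch["questions"], batch["answers"]
    ):
        for q, a in zip(questions, answers["input_text"]):
            out["story"].append(story)
            out["question"].append(q)
            out["answer"].append(a)
    return out

coqa = load_dataset("coqa", split="validation")
flat = coqa.map(flatten_turns, batched=True, remove_columns=coqa.column_names)
print(len(coqa), "->", len(flat))  # ~500 stories -> ~7983 question turns
```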

---

- Yes. For all tasks we ignore numerical span indices and always measure based on text.
- I will take a look at how other papers handled this and revisit.

---

For TriviaQA, NQ, and WebQuestions, we are evaluating on the open domain variant of the task and should use the evaluation procedure shown here: https://github.com/google-research/google-research/tree/master/t5_closed_book_qa
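For reference, a rough sketch of that exact-match scoring as I understand it (not the exact code from that repo): predictions and gold answers get SQuAD-style normalization, and a prediction counts as correct if it matches any gold alias.

```python
import re
import string

def normalize(text):
    # Lowercase, strip punctuation and articles, collapse whitespace
    # (the usual SQuAD-style answer normalization).
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    # Correct if the prediction matches ANY of the gold answer aliases.
    pred = normalize(prediction)
    return float(any(pred == normalize(g) for g in gold_answers))

print(exact_match("The Eiffel Tower", ["Eiffel Tower", "La Tour Eiffel"]))  # 1.0
```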

---

Posting a colab that calls the special eval scripts for the above datasets.
https://colab.research.google.com/drive/1G2zxbvi96qxbOv6LNYvTTrcJxsBX_4Hr
Edit:
- Updated with SQuAD v2 to exclude unanswerable questions (filtering sketch below)
- Normalized number words to numeric symbols for DROP
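A minimal sketch of the SQuAD v2 filtering, assuming the HF `squad_v2` schema where unanswerable questions have an empty `answers["text"]` list:

```python
from datasets import load_dataset

squad = load_dataset("squad_v2", split="validation")
# Unanswerable questions carry an empty answers["text"] list; drop them.
answerable = squad.filter(lambda ex: len(ex["answers"]["text"]) > 0)
print(len(squad), "->", len(answerable))
```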

---

Dataset splits for open-domain QA are a total mess, so I'm writing this down here for my own records:
- Natural Questions: In H.1, the GPT-3 paper says it is reporting results on the "test" split. However, standard open-domain and closed-book QA practice here is to use the validation set as the test set. I'm guessing that's what they mean but I'm waiting for confirmation.
- TriviaQA: GPT-3 reports results in the main paper on the test server; if they were following standard practice, this is the "full-em" score in the Wikipedia tab of this leaderboard (https://competitions.codalab.org/competitions/17208#results). In order to report comparable results, we need to: 1) copy the templates we have for trivia_qa/rc to trivia_qa/unfiltered, 2) run inference on the test set to get predictions, and 3) submit the predictions to the test server.
- WebQuestions: This one is simple; there is just a test set.

---

Leo Gao mentions that:
> for ARC, OpenBookQA, and RACE in particular, OpenAI claims that a different kind of normalization described in the paper works really well (they don't provide any evidence or explanation; they just claim it does and use it for just these 3 tasks)
We aren't doing any kind of length normalization, so if we are underperforming on those tasks, we could consider it.
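For reference, my reading of that normalization from the GPT-3 paper: for those three tasks, each candidate completion is scored by P(completion | context) / P(completion | "Answer: "), i.e. the completion's unconditional likelihood is divided out. A sketch, where `loglikelihood(prefix, continuation)` is a hypothetical helper returning the summed log-probability of `continuation` given `prefix`:

```python
def gpt3_normalized_pick(context, choices, loglikelihood):
    # log [P(choice | context) / P(choice | "Answer: ")]
    # = loglikelihood(context, choice) - loglikelihood("Answer: ", choice)
    scores = [
        loglikelihood(context, choice) - loglikelihood("Answer: ", choice)
        for choice in choices
    ]
    return max(range(len(choices)), key=scores.__getitem__)
```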

---

For DROP, I noticed that the model would predict a number as a word instead of its numeric symbol, so I added that to the normalization process.
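A minimal sketch of that extra normalization step (the word list here is illustrative, not the exact mapping used):

```python
# Map number words in model output to digits before scoring with the
# official DROP eval script.
WORD_TO_NUM = {
    "zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
    "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9",
    "ten": "10",
}

def normalize_number_words(text):
    return " ".join(WORD_TO_NUM.get(token.lower(), token) for token in text.split())

print(normalize_number_words("They scored three touchdowns"))
# -> "They scored 3 touchdowns"
```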
Predictions from drop_can_you_tell_me_1112200_predictions (model: finetune-t5-xxl-lm-d4-091621-512).
Before:

| Answer type | Count | EM | F1 |
|---|---|---|---|
| overall | | 26.51 | 30.32 |
| date | 152 (1.59%) | 64.474 | 71.829 |
| number | 5826 (61.09%) | 8.393 | 8.937 |
| span | 3069 (32.18%) | 63.245 | 69.709 |
| spans | 489 (5.13%) | 0.000 | 24.871 |

After:

| Answer type | Count | EM | F1 |
|---|---|---|---|
| overall | | 31.50 | 35.29 |
| date | 152 (1.59%) | 64.474 | 71.829 |
| number | 5837 (61.21%) | 16.567 | 17.110 |
| span | 3060 (32.09%) | 63.366 | 69.805 |
| spans | 487 (5.11%) | 0.000 | 24.903 |
For comparison, the GPT-3 paper's zero-shot DROP results (F1, dev split):

| GPT-3 model | Small | Med | Large | XL | 2.7B | 6.7B | 13B | 175B |
|---|---|---|---|---|---|---|---|---|
| F1 | 9.40 | 13.6 | 14.4 | 16.4 | 19.7 | 17.0 | 24.0 | 23.6 |
So we are performing significantly better than even the largest GPT-3.

---

For the record, each example in coqa and quac is actually N examples, where N is the number of turns of the dialog. Our prompts for coqa and quac will only evaluate on one turn per example. We need to create new serialized versions of the datasets if we are going to evaluate on the full dataset.

---

> Natural Questions: In H.1, the GPT-3 paper says it is reporting results on the "test" split. However, standard open-domain and closed-book QA practice here is to use the validation set as the test set. I'm guessing that's what they mean but I'm waiting for confirmation.
Got confirmation that this is indeed the case. Unfortunately, this split is not available on HF, apart from in the main nq dataset, which is utterly colossal. The one in the nq_open dataset (and the one in kilt nq) is different.

---

@craffel I've actually made prompts for coqa that try to solve this.
```
{% set n=25 %}
{% if questions|length > n %}
{{story}}
Q: {{questions[0]}}
{% for i in range(0,n) %}
A: {{answers['input_text'][i]}}
Q: {{questions[i+1]}}
{% endfor %}
A:
|||
{{answers['input_text'][n]}}
{% else %}
Placeholder, Do Not Process
|||
Placeholder, Do Not Process
{% endif %}
```
But the downside is that there has to be a unique prompt for each number of turns. For coqa the maximum number of turns is 25, so there need to be 25 unique prompts. The idea is then to collect the predictions into a JSON file and run the official eval script.
So far I've made around 15 unique prompts (just changing the number). I can make a pull request if this approach makes sense.
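For the "collect the predictions into a JSON file" step, my understanding is that the official CoQA eval script expects a list of {"id", "turn_id", "answer"} records; a hypothetical collection helper:

```python
import json

def write_coqa_predictions(records, path):
    # records: iterable of (story_id, turn_id, predicted_answer) triples,
    # one per dialog turn the model was evaluated on.
    preds = [
        {"id": story_id, "turn_id": turn_id, "answer": answer}
        for story_id, turn_id, answer in records
    ]
    with open(path, "w") as f:
        json.dump(preds, f)
```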

---

Here they are in hub:
https://huggingface.co/datasets/Zaid/coqa_expanded
https://huggingface.co/datasets/Zaid/quac_expanded
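They should load directly with `datasets` (split names assumed to mirror the originals):

```python
from datasets import load_dataset

coqa_expanded = load_dataset("Zaid/coqa_expanded")
quac_expanded = load_dataset("Zaid/quac_expanded")
```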

---

I wrote ten GPT-3-style ReCoRD prompts, where the model has to rank all the possible versions of the query sentence with each candidate entity filled in. They will only make sense for rank eval. I can try to run eval on them before we cache. Not sure if it will help but worth a try. #490
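Roughly, rank eval over these looks like the sketch below: fill each candidate entity into the query's `@placeholder`, score each filled-in continuation against the passage, and pick the highest-scoring one (`loglikelihood` is a hypothetical scoring helper, as above):

```python
def rank_record_example(passage, query, entities, loglikelihood):
    # query contains "@placeholder"; substitute each candidate entity,
    # then rank the resulting continuations by likelihood given the passage.
    scores = {
        entity: loglikelihood(passage, query.replace("@placeholder", entity))
        for entity in entities
    }
    return max(scores, key=scores.get)
```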

---

Datasets that are mostly done but we need to re-run eval and/or compute scores manually for all models:
- ReCoRD: Rewrote prompts in a GPT-3-style "continuation choices" format; this raised the average EM score from ~20% to ~40%, still very far below GPT-3's performance.
- SQuAD v2: Added "respond unanswerable" to the prompts; the model did sometimes say "unanswerable", but only rarely, so the average score did not change (still about 40% EM).
- DROP: Lintang added word/number normalization; this helped quite a bit.
- WebQuestions: When we run eval correctly, our score is reasonable.
- HellaSwag: We are just really bad at this dataset unless we train on it.
Datasets where there is still work to be done:
- CoQA: Zaid created an appropriate variant of the dataset and Stephen prompted it, but we need to add it to the CSV file, cache it, and re-run eval.
- QuAC: Same as CoQA.
- Natural Questions: We need to run eval on the same test set (= original validation set) that has been used on open domain QA papers. This will take a little work.
- TriviaQA: We switched to the correct subset (unfiltered) and fixed the example filtering and re-cached, but we need to run inference on the test set and submit to the test server. Based on our validation results, we are very likely to significantly underperform GPT-3.
Datasets I don't know the status of:
- Lambada
- Winogrande