
BorderLines Dataset of Territorial Disputes

Code and data for the NAACL 2024 paper "This Land is {Your, My} Land: Evaluating Geopolitical Biases in Language Models through Territorial Disputes" (https://arxiv.org/abs/2305.14610).

I. Using BorderLines Dataset

The full dataset consists of three separate parts: A) the disputed territories table (BorderLines proper); B) demographic information for the countries involved; C) the multilingual query sets for each territory.

You can obtain the dataset either by loading it from the Hugging Face datasets hub (option 1) or by cloning this repository (option 2).

1. Load from Datasets Hub

BorderLines is available on the Hugging Face datasets hub. Load it by running:

import datasets

# load disputed territories
territories = datasets.load_dataset('manestay/borderlines', 'territories')['train']

# load country demographics
countries = datasets.load_dataset('manestay/borderlines', 'countries')['train']

# load queries in 49 languages
queries = datasets.load_dataset('manestay/borderlines', 'queries')

Note: the above code is included in the function load_borderlines_hf of file run_gpt/lib.py.
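
Once loaded, the splits behave like standard Hugging Face Dataset objects. A minimal inspection sketch (uses only the generic datasets API; no field names are assumed):

# quick look at what was loaded
print(len(territories), territories.column_names)  # number of disputed territories and their fields
print(len(countries), countries.column_names)      # number of countries and their fields
print(list(queries.keys())[:5])                     # first few language splits in the query DatasetDict
print(territories[0])                               # one raw territory record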

2. Clone this repository

In this repository, we include the data files for the default version of BorderLines (2023-05-15), which is based on the 2023-05-15 snapshot of the Wikipedia article on territorial disputes.

The files are:

  • disputed_territories.csv: the main BorderLines territorial dispute table
  • countries_info.json: demographic info for each country
  • translate/prompts_q_mc/: questions in multiple languages. For example, prompts.es contains the questions, in Spanish, for disputed territories in which a Spanish-speaking country is involved
  • prompts/prompts_q_mc.txt: multiple-choice questions, in English, for each disputed territory. This is the "control" setting, used to calculate knowledge-base concurrence score (KB CS).
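
To sanity-check the cloned files, here is a minimal loading sketch using pandas and the standard library (illustrative only; column names and JSON structure are not assumed):

import json
import pandas as pd

# Main BorderLines dispute table: one row per disputed territory.
territories_df = pd.read_csv('disputed_territories.csv')
print(territories_df.shape, list(territories_df.columns))

# Demographic info for each country.
with open('countries_info.json') as f:
    countries_info = json.load(f)
print(len(countries_info))

# English multiple-choice questions (the "control" setting); format not assumed, just show size.
with open('prompts/prompts_q_mc.txt') as f:
    prompt_lines = f.read().splitlines()
print(len(prompt_lines), 'lines')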

Converting data files to datasets format

To use cloned data files with the evaluation scripts, convert them into the datasets format using:

python scripts/borderlines_to_datasets_format.py -o datasets/v1 -p prompts/prompts_q_mc.txt -td translate/terms -tp disputed_territories.csv -ip countries_info.json -pd translate/prompts_mc_q

II. Recreating BorderLines dataset (OPTIONAL)

If you want to reproduce the dataset, see RECREATE.md. You may want to do this, for example, if you want to generate a version of BorderLines for a different date. Otherwise, skip to III.

Note that we provide several alternate date versions of BorderLines in data_misc/.

III. Evaluation Suite on BorderLines

1. Run inference for language models

NOTE: The commands below run on BorderLines v1, downloaded from the datasets hub. If you are running on a local version (i.e. cloned, or created with section II), include the argument -dd {YOUR_DATASET_PATH} in each command.

A. GPT-3 inference

For GPT-3 models, we use rank classification: given a query and choices A and B, we form the two concatenations {query + A, query + B}, compute the probability of each, and take the more likely one as the model's response.

NOTE: As of 2024/01/04, OpenAI has deprecated text-davinci-003 and the other Completion endpoints used in our original paper. We recommend using davinci-002, as shown below.
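
For intuition, here is a rough, hedged sketch of rank classification against the legacy Completions endpoint (this is not the repository's run_gpt/run_inference_rank.py; it assumes the legacy behavior where echo=True with max_tokens=0 returns prompt-token logprobs):

from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")  # hypothetical placeholder

def prompt_logprob(text, model="davinci-002"):
    """Total log-probability of `text` under the model (legacy Completions API)."""
    resp = client.completions.create(
        model=model, prompt=text, max_tokens=0, echo=True, logprobs=0,
    )
    token_logprobs = resp.choices[0].logprobs.token_logprobs
    return sum(lp for lp in token_logprobs if lp is not None)  # first token has no logprob

query = "Territory X is a territory of "  # illustrative query, not an actual BorderLines prompt
choices = ["Country A", "Country B"]
scores = [prompt_logprob(query + c) for c in choices]
print(choices[scores.index(max(scores))])  # the higher-probability concatenation wins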

To run:

# run English and multilingual prompts
python run_gpt/run_inference_rank.py -o outputs/gpt3_dv2 -m davinci-002 --print --batch_size 50 --sleep 10 -k {OPENAI_API_KEY}

Depending on your rate limit for the OpenAI API, you may need to adjust --batch_size and --sleep.

B. Local model inference

For local models (BLOOM, T0, etc.), we use rank classification. This is implemented in rank_outputs/:

# run English and multilingual prompts
python rank_outputs/main.py -o outputs/bloomz-560m -m bigscience/bloomz-560m --batch_size 24

# run for 7b1, bloom, etc
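
The same idea, as a minimal transformers-based sketch (illustrative only, not the code in rank_outputs/): score each choice by the summed log-probability of its tokens given the query, then pick the higher-scoring choice.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloomz-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def choice_logprob(query, choice):
    """Summed log-probability of the choice tokens, conditioned on the query."""
    query_len = tokenizer(query, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(query + " " + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # distribution over each next token
    targets = full_ids[0, 1:]
    token_lps = log_probs[torch.arange(targets.shape[0]), targets]
    return token_lps[query_len - 1:].sum().item()           # keep only the choice tokens

query = "Territory X is a territory of"   # illustrative, not an actual BorderLines prompt
choices = ["Country A", "Country B"]
scores = [choice_logprob(query, c) for c in choices]
print(choices[scores.index(max(scores))])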

C. GPT-4 inference

For GPT-4, we use a parsing approach: the model generates a free-form response, and we then parse its selection from the output text. This allows us to perform our prompt modification experiments.

Run on the 4 system prompt configurations:

for PROMPT in vanilla nationalist un_peacekeeper input_demo ; do
  python run_gpt/run_inference.py -o outputs/gpt4-0314/$PROMPT -m gpt-4 --system run_gpt/system_prompts/$PROMPT.txt --sleep 0
done
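
Conceptually, each configuration just swaps the system prompt in the chat request. A hedged sketch of that call (not the repository's run_gpt/run_inference.py):

from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")  # hypothetical placeholder

with open("run_gpt/system_prompts/vanilla.txt") as f:
    system_prompt = f.read().strip()

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Territory X. Is it a territory of Country A or Country B?"},  # illustrative query
    ],
)
print(resp.choices[0].message.content)  # free-form answer, parsed later by gen_response_table.py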

2. Evaluate!

After running inference, you will have multiple response files (one per language). Combine them into a single response table by running:

# run for GPT-3
python gen_response_table.py -rd outputs/gpt3_dv2

# run for BLOOMZ 560M
python gen_response_table.py -rd outputs/bloomz-560m

# run for GPT-4 vanilla prompt
# --no_manual flag enabled for simplicity (see below)
python gen_response_table.py -rd outputs/gpt4-0314/vanilla --no_manual

# modify args for outputs from other models and prompts

Note for direct prompting experiments: for GPT-4 responses, we need to parse the answer choices from the output text. The gen_response_table.py script first attempts to parse each response automatically. Then:

  • If the flag --no_manual is ABSENT, the script will ask the user to "Make a choice" for each response it fails to parse. Read the 'response' and 'choices' fields, then select a choice {0, 1, ...}.
  • If the flag --no_manual is PRESENT, the script will select the choice that appears first in the response text, falling back to choice 0 when no choice is found.
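
A rough sketch of what such automatic matching can look like (hypothetical helper, not the actual gen_response_table.py logic):

def parse_choice(response, choices):
    """Index of the choice mentioned earliest in the response; 0 if none is found."""
    text = response.lower()
    hits = [(text.find(c.lower()), i) for i, c in enumerate(choices) if c.lower() in text]
    return min(hits)[1] if hits else 0

# e.g. parse_choice("I believe it belongs to Country B.", ["Country A", "Country B"]) -> 1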

3. Analyze concurrence scores

Calculate the concurrence scores (CS), as reported in Table 2 of the paper:

python calculate_CS.py outputs/gpt3_dv2/response_table.csv

python calculate_CS.py outputs/bloomz-560m/response_table.csv

python calculate_CS.py outputs/gpt4-0314/vanilla/response_table.csv

# modify args for outputs from other models and prompts
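
As a rough illustration of what a concurrence score measures, the sketch below computes the fraction of territories on which two sets of answers agree (the column names 'kb_answer' and 'model_answer' are hypothetical; the real response_table.csv layout may differ):

import pandas as pd

df = pd.read_csv("outputs/gpt3_dv2/response_table.csv")

# Hypothetical columns: 'kb_answer' (controller per the knowledge base) and
# 'model_answer' (the model's chosen claimant). KB CS = fraction of agreement.
kb_cs = (df["kb_answer"] == df["model_answer"]).mean()
print(f"KB concurrence score: {kb_cs:.3f}")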

Citation

@inproceedings{li2024land,
      title={This Land is \{Your, My\} Land: Evaluating Geopolitical Biases in Language Models through Territorial Disputes},
      author={Bryan Li and Samar Haider and Chris Callison-Burch},
      year={2024},
      booktitle={2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)}
}
