
unintended-ml-bias-analysis's Introduction

Unintended ML Bias Analysis

This repository contains the Sentence Templates datasets we use to evaluate and mitigate unintended machine learning bias in Perspective API. See our accompanying blog post to learn more about how we created these datasets.

This work is part of the Conversation AI project, a collaborative research effort exploring ML as a tool for better discussions online.

NOTE: We moved outdated scripts, notebooks, and other resources to the archive subdirectory. We no longer maintain those resources, but you may find some of the content helpful. In particular, see model_bias_analysis.py for an example of how to analyze model bias.

Background

As part of the Perspective API model training process, we evaluate identity-term bias in our models on synthetically generated and “templated” test sets. To generate these sets, we plug identity terms into both toxic and non-toxic template sentences. For example, given templates like “I am a <modifier> <identity>”, we evaluate differences in score on sentences like:

“I am a kind American”

“I am a kind Muslim”

Scores that vary significantly may indicate identity term bias within the model.
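As an illustration only (not the repository's actual tooling), here is a minimal sketch of the templating approach described above. The `score` function is a hypothetical stand-in for a call to a toxicity model such as Perspective API:

```python
from itertools import product

# Toxic and non-toxic sentence templates with an {identity} slot.
TEMPLATES = {
    "non_toxic": ["I am a kind {identity}", "Being {identity} is wonderful"],
    "toxic": ["I hate all {identity} people"],
}
IDENTITIES = ["American", "Muslim"]

def score(sentence):
    """Hypothetical stand-in for a model/API call returning a toxicity score in [0, 1]."""
    return 0.9 if "hate" in sentence else 0.1

def fill_templates():
    """Expand every template with every identity term and score the results."""
    rows = []
    for label, templates in TEMPLATES.items():
        for template, identity in product(templates, IDENTITIES):
            sentence = template.format(identity=identity)
            rows.append({"label": label, "identity": identity,
                         "sentence": sentence, "score": score(sentence)})
    return rows

# Large per-identity score gaps on otherwise identical sentences
# suggest identity-term bias in the model.
for row in fill_templates():
    print(row["label"], row["identity"], row["sentence"], row["score"])
```

Because the sentences in each pair differ only in the identity term, any systematic score difference can be attributed to the term itself rather than to the rest of the sentence.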

For more reading on unintended bias and how we measure bias using the resulting model scores, see:

Usage

We encourage researchers and developers to use these datasets to test for biases in their own models. However, Sentence Templates alone are insufficient for eliminating identity bias in machine learning language models. The examples are simple, unlikely to appear in real-world data, and may reflect our own biases. The identity terms also vary across languages, because direct word-for-word translation of identity terms is not sufficient, or even possible, given differences in cultures, religions, idioms, and identities.

Copyright and license

All code in this repository is made available under the Apache 2 license. All data in this repository is made available under the Creative Commons Attribution 4.0 International license (CC By 4.0). A full copy of the license can be found at https://creativecommons.org/licenses/by/4.0/

unintended-ml-bias-analysis's People

Contributors

alyssachvasta, dborkan, dependabot[bot], dslucas, g8a9, idk3, iislucas, jetpack, lucyvasserman, mariepellat, nthain, qiongyu, rubinovitz, seungwubaek, sorensenjs


unintended-ml-bias-analysis's Issues

Remove data from the github repo & fix .gitignore so it doesn't come back

At the moment, the instructions say to download the data, but the repository already contains it. It's generally better to separate concerns and have GitHub contain only the code.

The data directory should be removed or emptied, and its contents added to .gitignore so the data doesn't accidentally get checked in again.
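A sketch of the cleanup described above, assuming the directory is named `data/` (adjust the path to match the actual layout):

```shell
# Stop tracking the data directory but keep it on disk
git rm -r --cached data/
# Ignore it going forward so it can't be re-added accidentally
echo "data/" >> .gitignore
git add .gitignore
git commit -m "Remove data from the repo and ignore data/"
```

Note that `git rm --cached` only removes the files from the index; they remain in the repository's history unless the history itself is rewritten.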

'ToxModel' object has no attribute 'prep_data_and_score'

I am trying to do an initial run of Train_Toxicity_Model and am encountering the above error message at the final step:

debias_model.prep_data_and_score(debias_test['comment'], debias_test['is_toxic'])

Looking at model_tool.py, I cannot figure out what this step does, so I cannot guess how to edit the code to make it work. Please help!

Performance Issue: Slow read_csv() Function with pandas Version 1.3.4 for CSV Files

Issue Description:
Hello.
I have discovered a performance regression in the read_csv function of pandas version 1.3.4 when handling CSV files with a large number of columns. The problem increases loading time from a few seconds in the previous version, 1.2.5, to several minutes, almost a 60x difference. I found some discussions on GitHub related to this issue, including #44106 and #44192.
I found that archive/presentations/FAT_Star_Tutorial_Measuring_Unintended_Bias_in_Text_Classification_Models_with_Real_Data.ipynb and archive/unintended_ml_bias/Train_Toxicity_Model.ipynb both use the affected API, and there may be more files that do.

Steps to Reproduce:

I have created a small reproducible example to better illustrate this issue.

# v1.3.4
import os
import pandas
import numpy
import timeit

def generate_sample():
    if not os.path.exists("test_small.csv.gz"):
        nb_col = 100000
        nb_row = 5
        feature_list = {'sample': ['s_' + str(i+1) for i in range(nb_row)]}
        for i in range(nb_col):
            feature_list.update({'feature_' + str(i+1): list(numpy.random.uniform(low=0, high=10, size=nb_row))})
        df = pandas.DataFrame(feature_list)
        df.to_csv("test_small.csv.gz", index=False, float_format="%.6f")

def load_csv_file():
    col_names = pandas.read_csv("test_small.csv.gz", low_memory=False, nrows=1).columns
    types_dict = {col: numpy.float32 for col in col_names}
    types_dict.update({'sample': str})
    feature_df = pandas.read_csv("test_small.csv.gz", index_col="sample", na_filter=False, dtype=types_dict, low_memory=False)
    print("loaded dataframe shape:", feature_df.shape)

generate_sample()
timeit.timeit(load_csv_file, number=1)

# results
# loaded dataframe shape: (5, 100000)
# 120.37690759263933
# v1.3.5 — the same script as above, re-run after upgrading pandas

generate_sample()
timeit.timeit(load_csv_file, number=1)

# results
# loaded dataframe shape: (5, 100000)
# 2.8567268839105964

Suggestion

I would recommend upgrading to pandas >= 1.3.5, or exploring other ways to optimize CSV loading performance.
Any other workarounds or solutions would be greatly appreciated.
Thank you!
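As a rough guard against the slow code path described in this issue, a project could check the installed pandas version before loading very wide CSVs. This is an illustrative sketch, not part of the repository; the `pandas_at_least` helper and its naive version parsing are assumptions:

```python
import pandas as pd

def pandas_at_least(version_str, minimum=(1, 3, 5)):
    """Return True if the given pandas version string is >= minimum.

    This is a rough check: pre-release suffixes (e.g. "1.3.5rc1") are
    not parsed precisely, only their digits are kept.
    """
    parts = []
    for piece in version_str.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits or 0))
    return tuple(parts) >= minimum

if not pandas_at_least(pd.__version__):
    print(f"pandas {pd.__version__} may hit the slow read_csv path for wide CSVs; "
          "consider upgrading to >= 1.3.5")
```

For anything beyond a quick check, a proper version parser such as `packaging.version.Version` would be more robust.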

Missing Datasets

Hi, I am trying to reproduce the project's code, but I noticed that many dataset files are missing. Could you please provide all the datasets needed?


Tool to recreate dataset for debiasing

Is it possible to add the tool used to recreate the dataset from the Wikipedia dump, or to add the wikipedia_article_snippets.json file to the repository?

Thanks.

Working environment for the repo

Hello Authors,

This is impressive work, and I have been trying for the last couple of weeks to set up an environment to reproduce it, but I am having a hard time with the packages and their dependencies. Could you please provide a working environment?
