conversationai / unintended-ml-bias-analysis
License: Apache License 2.0
In the Random model section, running the notebook produces:
IOError: [Errno 2] No such file or directory: '../models/cnn_debias_random_tox_v3_hparams.json'
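A minimal guard one could add around the hparams load, assuming the notebook reads the file with json.load (only the path comes from the error above; the rest is an assumption about the notebook's internals):

import json
import os

# Hypothetical check: fail early with a pointer to the fix if the
# Random model's hparams file has not been produced yet.
hparams_path = '../models/cnn_debias_random_tox_v3_hparams.json'
if not os.path.exists(hparams_path):
    raise IOError(hparams_path + ' is missing; train the Random model '
                  'or place the pretrained files in ../models/ first.')
with open(hparams_path) as f:
    hparams = json.load(f)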
Hello,
I am trying to access the wiki train, test, and validation files that were originally at https://raw.githubusercontent.com/conversationai/unintended-ml-bias-analysis/master/data/. Is there a way to reproduce these files?
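In case it helps, a sketch of one way the splits might be rebuilt from the previously published Figshare toxicity release, assuming its annotated-comments TSV carries a 'split' column (an assumption about that dataset's schema, not something confirmed by this repo):

import pandas

# Assumption: toxicity_annotated_comments.tsv from the Figshare release
# has a 'split' column with values 'train', 'dev', and 'test'.
comments = pandas.read_csv('toxicity_annotated_comments.tsv', sep='\t')
for split in ('train', 'dev', 'test'):
    comments[comments['split'] == split].to_csv('wiki_%s.csv' % split, index=False)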
Thanks,
At the moment, the instructions say to download the data, but the data directory in GitHub already contains it. It's generally better to separate concerns and have GitHub contain only the code.
The data directory should be removed or emptied, and its path added to .gitignore so the data doesn't accidentally get checked in again.
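A minimal sketch of that cleanup, assuming the files live under data/ (the exact path is an assumption):

# Stop tracking the data without deleting local copies, then ignore it.
git rm -r --cached data/
echo "data/" >> .gitignore
git commit -m "Stop tracking data files"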
Currently, this uses the previously published Figshare toxicity data. Let's update this to include our new labels on the Wikipedia data, including subtypes of toxicity.
We should try to build interpretable models (see "Attention Is All You Need" or "Rationalizing Neural Predictions").
I am trying to do an initial run of Train_Toxicity_Model and am encountering the above error message at the final step:
debias_model.prep_data_and_score(debias_test['comment'], debias_test['is_toxic'])
Looking at model_tool.py, I cannot figure out what this step does, so I cannot guess how to edit the code to make it work. Please help!
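Not the repo's actual implementation, but a step named like this usually tokenizes the raw comments and scores them with the trained model; a hypothetical sketch of that pattern (every name and parameter below is an assumption):

import numpy
from keras.preprocessing.sequence import pad_sequences

def prep_data_and_score(model, tokenizer, texts, labels, maxlen=250):
    # Hypothetical: convert comments to padded token sequences, then
    # return the model's toxicity scores alongside the true labels.
    seqs = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=maxlen)
    scores = model.predict(seqs)[:, 0]
    return scores, numpy.asarray(labels)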
Is it possible to add a tool to recreate the dataset from the Wikipedia dump? Or to add the wikipedia_article_snippets.json file to the repository?
Thanks.
Hello authors,
This is impressive work, and I have been trying for the last couple of weeks to set up an environment to reproduce it, but I am having a hard time with the packages and their dependencies. Could you please provide a working environment?
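As a purely hypothetical starting point (this package list is an assumption based on the notebooks appearing to use Keras and pandas, not a pinned environment from the authors):

# requirements.txt -- hypothetical; versions intentionally unpinned
tensorflow
keras
pandas
numpy
scikit-learn
matplotlib
jupyter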
Issue Description:
Hello.
I have discovered a performance degradation in the read_csv function of pandas version 1.3.4 when handling CSV files with a large number of columns. It significantly increases the loading time, from just a few seconds under the previous version 1.2.5 to several minutes, almost a 60x difference. I found some discussions on GitHub related to this issue, including #44106 and #44192.
I found that archive/presentations/FAT_Star_Tutorial_Measuring_Unintended_Bias_in_Text_Classification_Models_with_Real_Data.ipynb and archive/unintended_ml_bias/Train_Toxicity_Model.ipynb both use the affected API, and there may be more files that do.
Steps to Reproduce:
I have created a small reproducible example to better illustrate this issue.
# v1.3.4
import os
import pandas
import numpy
import timeit

def generate_sample():
    # Write a sample CSV (5 rows x 100,000 columns) if it does not already exist.
    if not os.path.exists("test_small.csv.gz"):
        nb_col = 100000
        nb_row = 5
        feature_list = {'sample': ['s_' + str(i+1) for i in range(nb_row)]}
        for i in range(nb_col):
            feature_list.update({'feature_' + str(i+1): list(numpy.random.uniform(low=0, high=10, size=nb_row))})
        df = pandas.DataFrame(feature_list)
        df.to_csv("test_small.csv.gz", index=False, float_format="%.6f")

def load_csv_file():
    # Read the header row for column names, then load the full file with explicit dtypes.
    col_names = pandas.read_csv("test_small.csv.gz", low_memory=False, nrows=1).columns
    types_dict = {col: numpy.float32 for col in col_names}
    types_dict.update({'sample': str})
    feature_df = pandas.read_csv("test_small.csv.gz", index_col="sample", na_filter=False, dtype=types_dict, low_memory=False)
    print("loaded dataframe shape:", feature_df.shape)

generate_sample()
print(timeit.timeit(load_csv_file, number=1))
# results
loaded dataframe shape: (5, 100000)
120.37690759263933
# v1.3.5
(The script above, re-run unchanged under pandas 1.3.5.)
# results
loaded dataframe shape: (5, 100000)
2.8567268839105964
Suggestion
I would recommend upgrading to pandas >= 1.3.5, or exploring other ways to optimize the performance of loading CSV files.
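One possible guard while older environments are still around, sketched under the assumption that the affected notebooks import pandas directly (the version bounds follow the measurements above; the exact lower bound of the regression is an assumption):

import pandas

# Hypothetical runtime check: per the report above, 1.3.4 is slow and
# 1.3.5 is fast on very wide CSVs; warn on 1.3.x versions below 1.3.5.
parts = tuple(int(p) for p in pandas.__version__.split('.')[:3])
if parts[:2] == (1, 3) and parts < (1, 3, 5):
    print('Warning: this pandas version may hit a known read_csv slowdown '
          'with many columns; consider upgrading to >= 1.3.5.')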
Any other workarounds or solutions would be greatly appreciated.
Thank you!