
Code associated with "A Multidimensional Analysis of Social Biases in Vision Transformers"

Home Page: https://arxiv.org/abs/2308.01948

License: MIT License


A Multidimensional Analysis of Social Biases in Vision Transformers


This is the official implementation of "A Multidimensional Analysis of Social Biases in Vision Transformers" (Brinkmann et al., 2023).

The embedding spaces of image models have been shown to encode a range of social biases, such as racism and sexism. Here, we investigate the specific factors that contribute to the emergence of these biases in Vision Transformers (ViTs). To this end, we measure the impact of training data, model architecture, and training objectives on social biases in the learned representations of ViTs. Our findings indicate that counterfactual augmentation training using diffusion-based image editing can mitigate biases but does not eliminate them. Moreover, we find that larger models are less biased than smaller models, and that joint-embedding models are less biased than reconstruction-based models. In addition, we observe inconsistencies in the learned social biases: surprisingly, ViTs can exhibit opposite biases when trained on the same dataset using different self-supervised training objectives. Our findings give insights into the factors that contribute to the emergence of social biases and suggest that substantial fairness gains could be achieved through model design choices.

Requirements

To install requirements:

pip install -r requirements.txt

Datasets and Models

We use ImageNet-1k for the counterfactual augmentation training and the iEAT dataset to measure social biases in the embeddings. To generate textual descriptions of each image, we use CLIP Interrogator. Then, we generate counterfactual descriptions using the gender terms pairs of UCLA NLP and use those to generate counterfactual images using Diffusion-based Semantic Image Editing using Mask Guidance (see HuggingFace space).
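The caption-to-counterfactual step can be sketched as below. This is a minimal illustration, not the repository's implementation, and the term pairs shown are placeholder examples rather than the full UCLA NLP list:

```python
# Hedged sketch: swap gendered terms in a generated caption to obtain a
# counterfactual description. The pairs below are illustrative examples
# only; the paper uses the gender term pairs published by UCLA NLP.
GENDER_TERM_PAIRS = [("man", "woman"), ("he", "she"), ("his", "her"), ("boy", "girl")]

def counterfactual_caption(caption):
    """Replace each gendered term with its counterpart (case-insensitive
    lookup; replacements are emitted lowercase, which suffices for
    prompting an image editing model)."""
    mapping = {}
    for a, b in GENDER_TERM_PAIRS:
        mapping[a] = b
        mapping[b] = a
    words = caption.split()
    swapped = [mapping.get(w.lower(), w) for w in words]
    return " ".join(swapped)
```

The resulting counterfactual caption is then used as the prompt for the mask-guided diffusion editing step.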

We adopt HuggingFace's Transformers and Ross Wightman's Timm to support a range of different Vision Transformers. The models from the HuggingFace Hub are downloaded in the code. You can download the MoCo-v3 checkpoint at MoCo-v3.
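As a sketch (not the authors' exact code), loading one of these checkpoints from the Hub and extracting an image embedding with Transformers might look as follows; the checkpoint name is an illustrative choice:

```python
# Hedged sketch: load a ViT from the HuggingFace Hub and extract the
# final-layer [CLS] token embedding of an image. The checkpoint name is
# an illustrative assumption; the paper evaluates a range of ViTs served
# through HuggingFace Transformers and timm.
import torch
from transformers import AutoImageProcessor, AutoModel

def extract_cls_embedding(image, model_name="facebook/vit-msn-base"):
    """Return the final-layer [CLS] embedding for a PIL image."""
    processor = AutoImageProcessor.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0]  # shape: (1, hidden_dim)
```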

Citation

@article{brinkmann2023socialbiases,
    title   = {A Multidimensional Analysis of Social Biases in Vision Transformers},
    author  = {Brinkmann, Jannik and Swoboda, Paul and Bartelt, Christian},
    journal = {arXiv preprint arXiv:2308.01948},
    year    = {2023}
}

social-biases-in-vision-transformers's Issues

Seeking clarification on data in Table 4

Hi Authors,

Thank you so much for the great work! I have a question regarding the data presented in Table 4, the iEAT effect sizes for different models. I noticed that for some models and tests, such as ViT-MSN-B on T12, the effect size is -1.09, written in bold, suggesting that the effect is significant. May I ask how the significance level was calculated? Did you follow the iEAT one-sided test and compute the p-value as Pr[s(X_i, Y_i, A, B) > s(X, Y, A, B)]? If so, I wonder if there is a contradiction. If I understand correctly, the permutation test that calculates Pr[s(X_i, Y_i, A, B) > s(X, Y, A, B)] has the alternative hypothesis that X is more associated with A than Y, because it is a one-sided test. A significant effect would therefore suggest that X is more associated with A than Y. However, the effect size of -1.09 has a negative sign, suggesting instead that Y is more associated with A than X. This seems contradictory, which is a bit confusing. May I know how we should interpret results like -1.09? Thank you!
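For readers following this thread, the two quantities in question can be sketched in a few lines of NumPy. This is our illustrative rendering of the iEAT statistic, not the repository's code, and variable names are ours:

```python
# Hedged sketch of the iEAT effect size and one-sided permutation
# p-value discussed above. Embeddings are NumPy vectors; X, Y are target
# image sets and A, B are attribute image sets.
import numpy as np

def _assoc(w, A, B):
    # s(w, A, B): mean cosine similarity of w to A minus its mean
    # cosine similarity to B.
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def effect_size(X, Y, A, B):
    # d: standardized difference of mean associations (can be negative).
    sX = [_assoc(x, A, B) for x in X]
    sY = [_assoc(y, A, B) for y in Y]
    return (np.mean(sX) - np.mean(sY)) / np.std(sX + sY)

def p_value(X, Y, A, B, n_perm=1000, seed=0):
    # One-sided test: Pr[s(X_i, Y_i, A, B) > s(X, Y, A, B)] over random
    # re-partitions of X ∪ Y into halves of the original sizes.
    rng = np.random.default_rng(seed)
    W = list(X) + list(Y)
    s = lambda Xs, Ys: (sum(_assoc(x, A, B) for x in Xs)
                        - sum(_assoc(y, A, B) for y in Ys))
    s_obs = s(X, Y)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(W))
        Xi = [W[i] for i in idx[:len(X)]]
        Yi = [W[i] for i in idx[len(X):]]
        if s(Xi, Yi) > s_obs:
            count += 1
    return count / n_perm
```

Under this one-sided construction, a strongly negative d places s_obs in the left tail, so nearly all permutations exceed it and p approaches 1. A bolded negative entry would then have to come from a two-sided criterion such as min(p, 1 - p), which may be what the table reflects, though only the authors can confirm.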
