
Oracle Guardian AI Open Source Project is a library consisting of tools to assess fairness/bias and privacy of machine learning models and data sets.

Home Page: https://oracle-guardian-ai.readthedocs.io

License: Universal Permissive License v1.0

Languages: Makefile 0.11%, Python 99.89%
Topics: accelerated-data-science, bias-mitigation, fairness, machine-learning, oci, oracle, privacy, responsible-ai

guardian-ai's Introduction

Oracle Guardian AI Open Source Project


Oracle Guardian AI Open Source Project is a library consisting of tools to assess fairness/bias and privacy of machine learning models and data sets. This package contains the fairness and privacy_estimation modules.

The Fairness module offers tools to help you diagnose and understand the unintended bias present in your dataset and model so that you can take steps toward more inclusive and fair applications of machine learning.

The Privacy Estimation module helps estimate potential leakage of sensitive information in the training data through attacks on machine learning (ML) models. The main idea is to carry out membership inference attacks on a target model trained on the sensitive dataset and to measure their success in order to estimate the risk of leakage.
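As a concept-only illustration of a membership inference attack (this is not the guardian_ai privacy_estimation API; the data, names, and threshold below are assumptions), a minimal sketch with scikit-learn:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a target model on "member" records; keep the rest as non-members.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)
target_model = RandomForestClassifier(random_state=0).fit(X_member, y_member)

# Simplest attack: records the model is very confident about are guessed to be members.
def confidence_attack(model, X, threshold=0.9):
    return model.predict_proba(X).max(axis=1) >= threshold

# The gap between these two rates is a rough signal of how much the model leaks.
print("guessed member (true members):    ", confidence_attack(target_model, X_member).mean())
print("guessed member (true non-members):", confidence_attack(target_model, X_nonmember).mean())

The privacy_estimation module packages this kind of analysis with multiple attack types and success reporting; see the documentation linked above for the actual API.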

Installation

You have various options when installing oracle-guardian-ai.

Installing the oracle-guardian-ai base package

python3 -m pip install oracle-guardian-ai

Installing extra libraries

The all-optional extra installs all optional dependencies. Note the single quotes around the package name when installing extras.

python3 -m pip install 'oracle-guardian-ai[all-optional]'

To work with fairness/bias, install the fairness extra. Its additional dependencies are listed in requirements-fairness.txt.

python3 -m pip install 'oracle-guardian-ai[fairness]'

To work with privacy estimation, install the privacy extra. Its additional dependencies are listed in requirements-privacy.txt.

python3 -m pip install 'oracle-guardian-ai[privacy]'

Documentation

Full documentation is available at https://oracle-guardian-ai.readthedocs.io.

Examples

Measurement with a Fairness Metric

from guardian_ai.fairness.metrics import ModelStatisticalParityScorer
fairness_score = ModelStatisticalParityScorer(protected_attributes='<target_attribute>')
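A minimal usage sketch, assuming the scorer follows the scikit-learn scorer calling convention of (model, X, y); model, X_test, and y_test are placeholders for a fitted classifier and held-out data, not defined in the example above:

# Hypothetical call: the returned value quantifies disparity between groups.
disparity = fairness_score(model, X_test, y_test)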

Bias Mitigation

from guardian_ai.fairness.bias_mitigation import ModelBiasMitigator

# Wrap an already-trained model and trade off fairness against accuracy.
bias_mitigated_model = ModelBiasMitigator(
    model,
    protected_attribute_names='<target_attribute>',
    fairness_metric="statistical_parity",
    accuracy_metric="balanced_accuracy",
)

# Tune the mitigation on validation data, then predict with the mitigated model.
bias_mitigated_model.fit(X_val, y_val)
bias_mitigated_model.predict(X_test)

Contributing

This project welcomes contributions from the community. Before submitting a pull request, please review our contribution guide.

Find Getting Started instructions for developers in README-development.md.

Security

Consult the security guide SECURITY.md for our responsible security vulnerability disclosure process.

License

Copyright (c) 2023 Oracle and/or its affiliates. Licensed under the Universal Permissive License v1.0.

guardian-ai's People

Contributors

alina-yur, dependabot[bot], ehsan-s, liudmylaru, mingkang111, spavlusieva


guardian-ai's Issues

[FR]: Membership inference attacks in recommender systems

Willingness to contribute

Yes. I can contribute this feature independently.

Proposal Summary

This addition to the privacy estimation tool extends its capabilities to include membership inference attacks specifically tailored for recommender systems. As with traditional membership inference attacks, the recommender-system attacks analyze the model's prediction patterns to infer membership status.

Motivation

What is the use case for this feature?

This enhancement extends the tool's applicability to privacy analysis in recommendation systems, ensuring a more comprehensive assessment of potential information leakage.

Why is this use case valuable to support for OCI DataScience users in general?

Any analysis of recommender systems in OCI could use this tool to evaluate a recommender's privacy protections.

Why is this use case valuable to support for your project(s) or organization?

As members of a recommender team, we consider it extremely important to understand how well user data is protected. This tool would allow us to measure that level of privacy.

Why is it currently difficult to achieve this use case?

The implementation of a membership inference attack in a recommender system relies upon the creation of a shadow model that is supposed to mimic the target model as closely as possible. Without direct access to training data, it is difficult to achieve this.
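For illustration only (not part of this proposal's implementation), a minimal shadow-model sketch using generic scikit-learn classifiers; every name here is hypothetical and the setup is deliberately simplified:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Attacker-controlled data stands in for the (unknown) target training data.
X, y = make_classification(n_samples=4000, n_features=20, random_state=1)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=1)

# The shadow model mimics the target model's (assumed) training procedure.
shadow = RandomForestClassifier(random_state=1).fit(X_in, y_in)

# Attack features: the shadow model's confidence on members (X_in) vs. non-members (X_out).
features = np.concatenate([
    shadow.predict_proba(X_in).max(axis=1),
    shadow.predict_proba(X_out).max(axis=1),
]).reshape(-1, 1)
labels = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])

# The attack model learns what member vs. non-member predictions look like;
# it is then applied to the real target model's outputs on candidate records.
attack_model = LogisticRegression().fit(features, labels)

For a recommender, the confidence score would be replaced by features derived from the model's recommendation lists, as described in Zhang et al.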

Details

This contribution will follow the approach of the paper by Zhang et al., Membership Inference Attacks Against Recommender Systems, and will be contributed by Animesh Agarwal and Ian Hanus.
