aif360: Help Detect and Mitigate Bias in Machine Learning Models

The 'AI Fairness 360' toolkit is an open-source library that helps detect and mitigate bias in machine learning models. The AI Fairness 360 R package includes a comprehensive set of metrics for testing datasets and models for bias, explanations of these metrics, and algorithms to mitigate bias in datasets and models.
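Since the R package wraps the Python toolkit via reticulate, a one-time setup step is needed after installing from CRAN. The sketch below follows the package README; the helper names `install_aif360()` and `load_aff360_lib()` spellings should be checked against your installed version, and the snippet is a minimal illustration rather than a complete workflow.

```r
# Install the CRAN package (version 0.1.0 at the time of this listing)
install.packages("aif360")
library(aif360)

# One-time setup: installs the underlying Python 'aif360' dependencies
# through reticulate (helper name per the package README; treat as an
# assumption if your version differs)
install_aif360()

# Load the Python library into the current R session before computing
# any fairness metrics or applying mitigation algorithms
load_aif360_lib()
```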

Version: 0.1.0
Imports: reticulate, rstudioapi
Suggests: testthat
Published: 2020-06-23
Author: Gabriela de Queiroz [aut], Stacey Ronaghan [aut], Saishruthi Swaminathan [aut, cre]
Maintainer: Saishruthi Swaminathan < at>
License: Apache License (≥ 2.0)
NeedsCompilation: no
Materials: README
CRAN checks: aif360 results


Reference manual: aif360.pdf
Package source: aif360_0.1.0.tar.gz
Windows binaries: r-devel:, r-release:, r-oldrel:
macOS binaries: r-release: aif360_0.1.0.tgz, r-oldrel: aif360_0.1.0.tgz

