AI Machine Learning & Data Science Research

Improving ML Fairness: IBM, UMich & ShanghaiTech Papers Focus on Statistical Inference and Gradient-Boosting

A team from the University of Michigan, the MIT-IBM Watson AI Lab and ShanghaiTech University has published two papers on individual fairness for ML models, introducing a scale-free, interpretable and statistically principled approach for assessing individual fairness, and a method for enforcing individual fairness in gradient boosting that is suitable for non-smooth ML models.

Recent breakthroughs in machine learning (ML) have enabled AI systems to assume increasingly important roles in real-world decision-making. Studies have suggested, however, that such systems may be prone to biases that could result in discrimination against individuals on the basis of race or gender.

In the 2011 paper Fairness Through Awareness, Cynthia Dwork et al. propose that an ML model lacks “individual fairness” if a pair of valid inputs that are close to each other (according to an appropriate metric) are treated differently by the model (assigned different class labels, or given outputs that differ substantially). If no such pairs exist, the model is not considered biased.
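
To make the definition concrete, here is a minimal sketch of the pairwise check it implies; the model, the metrics and the thresholds are hypothetical placeholders of our own, not anything specified in the papers.

```python
import numpy as np

def flags_unfair_pair(model, x1, x2, input_metric, output_metric,
                      input_eps, output_eps):
    """Check one pair of inputs against a Dwork-style individual fairness test.

    A violation is flagged when the two inputs are close under `input_metric`
    but the model's outputs differ by more than `output_eps` under
    `output_metric`.  All arguments are illustrative placeholders: `model`
    maps an input to a prediction, and the metrics/thresholds encode what
    "similar" means for the task at hand.
    """
    inputs_are_similar = input_metric(x1, x2) <= input_eps
    outputs_differ = output_metric(model(x1), model(x2)) > output_eps
    return inputs_are_similar and outputs_differ


# Toy usage with Euclidean metrics and a linear "model".
model = lambda x: float(np.dot([2.0, -1.0], x))
metric = lambda a, b: float(np.linalg.norm(np.subtract(a, b)))
print(flags_unfair_pair(model, [1.0, 0.0], [1.0, 0.1], metric, metric, 0.5, 0.05))
```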

In a bid to detect biases and increase individual fairness in ML models, a research team from the University of Michigan, MIT-IBM Watson AI Lab and ShanghaiTech University recently published two papers on the topic: Statistical Inference for Individual Fairness and Individually Fair Gradient Boosting.

In the first paper, Statistical Inference for Individual Fairness, the researchers propose a statistically principled approach to assessing the individual fairness of ML models. They also develop a suite of inference tools for the adversarial cost function that allows an investigator to calibrate the method, for example to prescribe a Type I error rate.

The researchers summarize their approach as follows:

  1. Generating unfair examples: by an unfair example we mean an example that is similar to a training example but treated differently by the ML model. Such examples are similar to adversarial examples, except that they are only allowed to differ from a training example in certain protected or sensitive ways.
  2. Summarizing the behaviour of the ML model on unfair examples: We propose a loss-ratio-based approach that is not only scale-free but also interpretable (a simple sketch follows this list). For classification problems, we propose a variation of our test based on the ratio of error rates.
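
The following sketch illustrates the loss-ratio idea under simple assumptions (averaging over examples, array arguments named by us); the authors' actual test statistic is defined more carefully in the paper.

```python
import numpy as np

def loss_ratio(losses_on_unfair, losses_on_original, eps=1e-12):
    """Scale-free summary: how much larger is the loss on unfair examples?

    `losses_on_original[i]` is the model's loss on training example i and
    `losses_on_unfair[i]` the loss on the unfair example derived from it.
    A ratio well above 1 suggests the model treats similar inputs differently.
    """
    num = np.mean(losses_on_unfair)
    den = np.mean(losses_on_original) + eps  # guard against a zero denominator
    return float(num / den)

def error_rate_ratio(errors_on_unfair, errors_on_original, eps=1e-12):
    """Classification variant based on 0/1 errors instead of raw losses."""
    return float((np.mean(errors_on_unfair) + eps) /
                 (np.mean(errors_on_original) + eps))
```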

The team first uses a gradient flow-based approach to find unfair samples. The gradient flow attack solves a continuous-time ordinary differential equation, from which it is possible to extract an “unfair map” that sends samples in the data to similar points in the sample space where the ML model performs poorly, thereby identifying areas where the model violates individual fairness.
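
A rough picture of the attack is sketched below, assuming hypothetical callables `grad_loss` (gradient of the model's loss at a point) and `grad_fair_penalty` (gradient of a penalty that keeps the point close to its start under the fair metric); the explicit-Euler discretization stands in for the paper's ODE solver and is not the authors' exact procedure.

```python
import numpy as np

def unfair_map(x0, grad_loss, grad_fair_penalty, reg=1.0,
               step_size=0.01, n_steps=200):
    """Follow a gradient flow from a training point toward a similar point
    on which the model performs poorly.

    The flow increases the model's loss while a fair-metric penalty (weighted
    by `reg`) keeps the point close to x0 in the protected/sensitive sense.
    Each loop iteration is one explicit Euler step of the initial value
    problem; step size, regularization and iteration count are illustrative.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        velocity = grad_loss(x) - reg * grad_fair_penalty(x, x0)
        x = x + step_size * velocity
    return x
```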

They define their test statistic in terms of the unfair map, an approach they say has two main benefits:

  1. Computational tractability: Evaluating the unfair map is computationally tractable because integrating initial value problems (IVP) is a well-developed area of scientific computing.
  2. Reproducibility: By defining the test statistic algorithmically, we avoid ambiguity in the algorithm and initial iterate, thereby ensuring reproducibility.

The team verifies the methodology with a case study testing individual fairness on the Adult dataset. They perform tests using four classifiers: a baseline neural network (NN), a group-fairness reductions algorithm, the individual-fairness SenSR algorithm, and a basic “project” algorithm. Group fairness is compared using the average odds difference (AOD) for gender and race, with a significance level of 0.05 for rejecting the null hypothesis and δ = 1.25.
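
For reference, the average odds difference used in the gender and race comparisons is a standard group-fairness metric; a common formulation is sketched below (the binary group encoding and variable names are ours, not taken from the paper).

```python
import numpy as np

def average_odds_difference(y_true, y_pred, group):
    """Average odds difference between the two values of a binary protected attribute.

    AOD = 0.5 * [(FPR_0 - FPR_1) + (TPR_0 - TPR_1)], where 0 and 1 are the
    two groups encoded in `group`.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))

    def rates(mask):
        yt, yp = y_true[mask], y_pred[mask]
        tpr = float(np.mean(yp[yt == 1])) if np.any(yt == 1) else 0.0
        fpr = float(np.mean(yp[yt == 0])) if np.any(yt == 0) else 0.0
        return tpr, fpr

    tpr0, fpr0 = rates(group == 0)
    tpr1, fpr1 = rates(group == 1)
    return 0.5 * ((fpr0 - fpr1) + (tpr0 - tpr1))
```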

The results show that the baseline violates the individual fairness condition, and that the basic project algorithm improves individual fairness but still fails to pass the hypothesis test, while SenSR preserves individual fairness. The experiments specifically demonstrate the proposed tools’ ability to reveal gender and racial biases in an income prediction model.

The team’s second paper, Individually Fair Gradient Boosting, focuses on enforcing individual fairness in gradient boosting. Gradient boosting is a popular method for tabular data problems that produces a prediction model in the form of an ensemble of weak prediction models, such as decision trees.
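
As a refresher on plain, fairness-agnostic gradient boosting, here is a minimal least-squares sketch built on scikit-learn decision trees; this is textbook boosting, not the paper's algorithm, and the hyperparameters are illustrative defaults.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boosting(X, y, n_rounds=100, learning_rate=0.1, max_depth=3):
    """Textbook gradient boosting for squared-error loss.

    Each round fits a shallow regression tree to the current residuals (the
    negative gradient of the loss) and adds a damped copy of it to the ensemble.
    """
    base = float(np.mean(y))
    prediction = np.full(len(y), base)
    trees = []
    for _ in range(n_rounds):
        residuals = y - prediction
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        prediction = prediction + learning_rate * tree.predict(X)
        trees.append(tree)
    return base, trees

def boosted_predict(base, trees, X, learning_rate=0.1):
    """Sum the base value and the damped contribution of every tree."""
    return base + learning_rate * sum(tree.predict(X) for tree in trees)
```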

Existing approaches to enforcing individual fairness are either not suitable for training non-smooth ML models or perform poorly with flexible non-parametric ML models. To fill this gap, the proposed method is designed to handle non-smooth ML models.

The researchers summarize their main contributions as:

  1. We develop a method to enforce individual fairness in gradient boosting. Unlike other methods for enforcing individual fairness, our approach handles non-smooth ML models such as (boosted) decision trees.
  2. We show that the method converges globally and leads to ML models that are individually fair. We also show that it is possible to certify the individual fairness of the models a posteriori.
  3. We show empirically that our method preserves the accuracy of gradient boosting while improving widely used group and individual fairness metrics.

This work aims to train an ML model that is individually fair. The researchers enforce distributionally robust fairness, which asserts that an ML model should have similar performance on similar samples. To achieve this, they use adversarial learning to train an individually fair ML model that is resistant to adversarial attacks.
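
The adversarial-training idea can be pictured with the high-level sketch below, where `attack` plays the role of an inner step that perturbs each sample toward a similar point (under the fair metric) on which the current ensemble does worse; all callables and names here are our own placeholders rather than the authors' API.

```python
def fair_boosting_round(predict_ensemble, fit_weak_learner, negative_loss_grad,
                        X, y, attack):
    """One adversarial boosting round, sketched at a high level.

    1. Inner step: `attack` returns perturbed inputs X_adv that stay close to
       X under the fair metric but raise the current ensemble's loss.
    2. Outer step: fit the next weak learner to the negative loss gradient
       (pseudo-residuals) evaluated at those adversarial points, so the
       ensemble is pushed to perform well on them too.
    """
    X_adv = attack(X, y, predict_ensemble)
    pseudo_residuals = negative_loss_grad(y, predict_ensemble(X_adv))
    return fit_weak_learner(X_adv, pseudo_residuals)
```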

The team also studies the convergence and generalization properties of fair gradient boosting, finding that it is possible to certify a posteriori that a non-smooth ML model is individually fair by checking its empirical performance gap, which allows practitioners to certify the worst-case performance differential of an ML model.
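
In that spirit, the a posteriori check can be pictured as comparing the model's loss on held-out data with its loss on adversarially perturbed (but similar) versions of that data; the gap definition and tolerance below are illustrative choices, not the paper's certificate.

```python
import numpy as np

def certify_performance_gap(losses_clean, losses_adversarial, tolerance):
    """Empirical sketch of a worst-case performance-differential check.

    Returns whether the observed gap between adversarial and clean losses is
    within `tolerance`, along with the gap itself.
    """
    gap = float(np.mean(losses_adversarial) - np.mean(losses_clean))
    return gap <= tolerance, gap
```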

Finally, they apply fair gradient boosted trees (BuDRO) to three datasets: the German credit dataset, the Adult dataset and the COMPAS recidivism prediction dataset.

On the German credit dataset, the results show that GBDTs trained with XGBoost are the most accurate, outperforming the baseline neural network (NN). The BuDRO method, meanwhile, achieves the highest individual fairness (S-cons) while maintaining high accuracy.

On the Adult dataset, the GBDT method is again the most accurate. Although BuDRO is slightly less accurate than the baseline, the gender gaps shrink considerably, and BuDRO improves on SenSR’s accuracy while achieving similar individual fairness scores.

On the COMPAS recidivism prediction dataset, BuDRO achieves accuracy similar to that of a neural network trained with SenSR but higher individual fairness, further demonstrating the approach’s effectiveness at enforcing individual fairness.

In their first paper, the researchers developed a suite of inferential tools for detecting and measuring individual bias in ML models, which will enable investigators to assess a model’s individual fairness in an effective, statistically principled way. The second paper proposed a gradient boosting algorithm that can enforce individual fairness for non-smooth ML models and preserve gradient boosting accuracy while improving individual fairness.

The papers Statistical Inference for Individual Fairness and Individually Fair Gradient Boosting are on arXiv.


Author: Hecate He | Editor: Michael Sarazen


We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.
