Ermon Group Blog


Controllable Fairness in Machine Learning

This post is an overview of our AISTATS 2019 paper
Learning Controllable Fair Representations
Jiaming Song*, Pratyusha Kalluri*, Aditya Grover, Shengjia Zhao, Stefano Ermon

Machine learning systems are increasingly used in high-stakes decisions, influencing credit scores, criminal sentences, and more. This raises an urgent question: how do we ensure these systems do not discriminate based on race, gender, disability, or other minority status? Many researchers have responded by introducing fair machine learning models that balance accuracy and fairness; but this leaves it up to institutions — corporations, governments, etc. — to choose to use these fair models, when some of these institutions may be agnostic or even adversarial to fairness.

Interestingly, some researchers have introduced methods for learning fair representations (Madras et al. 2018). Using such methods, a party who is concerned with fairness (like a data collector, community organizer, or regulatory body) can convert data to fair representations, then release only the representations, making it much more difficult for any downstream machine learning models to discriminate. In this post, we introduce a theoretically grounded approach to learning fair representations, and we discover that a range of existing methods are special cases of our approach.

We note that all existing methods for learning fair representations can be said to balance usefulness and fairness, producing somewhat-useful-and-somewhat-fair representations. The concerned party must then run the learning process many times until they obtain representations they find satisfactory. Based on our theoretical approach, we introduce a new method where the concerned party can control the fairness of the representations by requesting specific limits on unfairness. Compared to earlier fair representations, ours can be learned more quickly, can satisfy requests for many notions of fairness simultaneously, and contain more useful information.

A theoretical approach to fair representations

We assume we are given a set of data points (\(x\)), typically representing people, and their sensitive attributes (\(u\)), typically their race, gender, or other minority status. We must learn a model (\(q_\phi\)) mapping any data point to a new representation (\(z\)). Our goal is two-fold: our representations should be expressive — containing plenty of useful information about the data point; and our representations should be fair — containing limited information about the sensitive attributes, so it is difficult to discriminate downstream [1]. Note that merely removing sensitive attributes (e.g. race) from the data would not satisfy this notion of fairness, as downstream machine learning models could then discriminate based on correlated features (e.g. zip code) — a practice called “redlining”.

First, we translate our goal into the information-theoretic concept of mutual information. The mutual information between two variables is formally defined as the Kullback-Leibler divergence between the joint distribution of the variables and the product of their marginals: \(I(u;v) = D_{KL}(P_{(u,v)}\mid\mid P_u \otimes P_v)\); intuitively, it is the amount of information the two variables share (a small worked example appears after the list below). Our goals can be made concrete as follows:

  • To achieve expressiveness, we want to maximize the mutual information between the data point \(x\) and the representation \(z\) conditioned on the sensitive attributes \(u\): \(\max I(x;z \mid u)\). (By conditioning on the sensitive attributes, we make sure we do not encourage information in the data point that is correlated with the sensitive attributes to appear in our representation.)
  • To achieve fairness, we want to limit the mutual information between the representation \(z\) and the sensitive attributes \(u\): \(I(z;u)<\epsilon\), where \(\epsilon\) has been set by the concerned party.
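
To make the mutual-information definition above concrete, here is a minimal sketch (ours, not from the paper) that computes \(I(u;v)\) for two discrete variables directly from a joint probability table, using exactly the KL-divergence form given above; the `mutual_information` helper and the example joint distribution are purely illustrative.

```python
import numpy as np

def mutual_information(joint):
    """I(u;v) = KL(P_(u,v) || P_u x P_v) for a discrete joint distribution.

    `joint` is a 2-D array where joint[i, j] = P(u = i, v = j).
    """
    p_u = joint.sum(axis=1, keepdims=True)   # marginal P_u
    p_v = joint.sum(axis=0, keepdims=True)   # marginal P_v
    product = p_u * p_v                      # product of marginals P_u x P_v
    mask = joint > 0                         # 0 log 0 = 0 by convention
    return np.sum(joint[mask] * np.log(joint[mask] / product[mask]))

# Example: u and v are perfectly correlated bits, so I(u;v) = log 2 ~= 0.693 nats.
joint = np.array([[0.5, 0.0],
                  [0.0, 0.5]])
print(mutual_information(joint))
```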

Next, because both mutual information terms are difficult to optimize directly, we need tractable approximations (a rough code sketch of these quantities follows the list):

  • Instead of maximizing \(I(x;z \mid u)\), we maximize a lower bound \(-L_r \leq I(x;z \mid u)\), which relies on us introducing a new model \(p_\theta(x \mid z,u)\). Intuitively, maximizing \(-L_r\) encourages a mapping such that the new model \(p_\theta\) that takes the representation \(z\) plus the sensitive attributes \(u\) can successfully reconstruct the data point \(x​\).
  • Instead of constraining \(I(z;u)\), we can constrain an upper bound \(C_1 \geq I(z;u)\). Intuitively, constraining \(C_1\) discourages complex representations.

    Or, we can alternatively constrain \(C_2\), a tighter approximation of \(I(z;u)\), which relies on us introducing a new model \(p_\psi(u \mid z)\). Intuitively, constraining \(C_2\) discourages a mapping where the new model \(p_\psi\) that takes the representation \(z\) is able to reconstruct the sensitive attributes \(u\).
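
To give a rough sense of how these three quantities can be estimated in practice, here is a sketch of one-batch Monte Carlo estimates, assuming a Gaussian encoder \(q_\phi(z \mid x, u)\), a Bernoulli decoder \(p_\theta(x \mid z, u)\), a standard-normal prior on \(z\), and a classifier \(p_\psi(u \mid z)\) for a binary sensitive attribute. The `encoder`, `decoder`, and `adversary` modules and their signatures are our own illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def estimate_terms(x, u, encoder, decoder, adversary):
    """One-batch Monte Carlo estimates of L_r, C_1, and the adversarial
    term behind C_2 (up to the additive constant H(u), which does not
    affect the gradients).

    Assumed (illustrative) signatures:
      encoder(x, u) -> (mu, logvar) of the Gaussian q_phi(z | x, u)
      decoder(z, u) -> logits of a Bernoulli p_theta(x | z, u)
      adversary(z)  -> logit of p_psi(u = 1 | z) for a binary u
    """
    mu, logvar = encoder(x, u)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterized sample of z

    # L_r: expected negative log-likelihood of reconstructing x from (z, u).
    x_logits = decoder(z, u)
    L_r = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum") / x.shape[0]

    # C_1: KL( q_phi(z | x, u) || N(0, I) ), the complexity-style upper bound.
    C_1 = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()

    # Adversarial term behind C_2: E[log p_psi(u | z)]; training the adversary
    # to maximize it makes the approximation of I(z; u) tighter.
    u_logits = adversary(z)
    C_2 = -F.binary_cross_entropy_with_logits(u_logits, u, reduction="mean")

    return L_r, C_1, C_2
```

In a full implementation, the encoder, decoder, and adversary would be neural networks; the key point is that \(L_r\), \(C_1\), and \(C_2\) are all differentiable, so they can be plugged into the objectives below.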

Putting it all together, our final objective is to find the models \(q_\phi\), \(p_\theta\), and \(p_\psi\) that encourage the successful reconstruction of the data points, while constraining the complexity of the representations, and constraining the reconstruction of the sensitive attributes:

Our “hard-constrained” objective for learning fair representations
\(\min_{\theta,\phi}\max_{\psi} \; L_r \quad \text{s.t. } C_1 < \epsilon_1,\ C_2 < \epsilon_2\)

where \(\epsilon_1\) and \(\epsilon_2\) are limits that have been set by the concerned party.

This gives us a principled approach to learning fair representations. And we are rewarded with a neat discovery: it turns out that a range of existing methods for learning fair representations optimize the dual — a “soft-regularized” version — of our objective!

The “soft-regularized” loss function for learning fair representations
\(\min_{\theta,\phi}\max_{\psi} \; L_r + \lambda_1 C_1 + \lambda_2 C_2\)


| Existing method | The \(\lambda_1\) they use | The \(\lambda_2\) they use |
| --- | --- | --- |
| (Zemel et al. 2013) | \(0\) | \(A_z/A_x\) |
| (Edwards and Storkey 2015) | \(0\) | \(\alpha/\beta\) |
| (Madras et al. 2018) | \(0\) | \(\gamma/\beta\) |
| (Louizos et al. 2015) | \(1\) | \(\beta\) |

We see that our framework generalizes a range of existing methods!

Learning controllable fair representations

Let’s now take a closer look at the “soft-regularized” loss function. It should feel intuitive that existing methods for learning fair representations produce somewhat-useful-and-somewhat-fair representations, with the balance between expressiveness and fairness controlled by the choice of \(\lambda\)s. If only we could optimize our “hard-constrained” objective directly, the concerned party could simply set \(\epsilon\) to request specific limits on unfairness . . .

Luckily, there’s a way! We introduce:

Our loss function for learning controllable fair representations
\(\max_{\lambda \geq 0}\min_{\theta,\phi}\max_{\psi} \; L_r + \lambda_1 (C_1-\epsilon_1) + \lambda_2 (C_2-\epsilon_2)\)

Intuitively, this loss function dictates that whenever we should be concerned about unfairness because \(C_1 > \epsilon_1\) or \(C_2 > \epsilon_2\), the corresponding \(\lambda\) places additional emphasis on the unsatisfied constraint; this emphasis persists until \(C_1\) and \(C_2\) return to satisfying the limits \(\epsilon\) set by the concerned party. The rest of the time, when \(C_1\) and \(C_2\) are safely within the limits, minimizing \(L_r\) takes priority, encouraging expressive representations.
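
Below is a minimal sketch of how this could be trained, reusing the illustrative `estimate_terms` helper from the earlier snippet and assuming an optimizer `model_opt` over the encoder and decoder parameters and an optimizer `adv_opt` over the adversary parameters; the paper's actual training procedure may differ in its details. The \(\lambda\)s are updated by projected gradient ascent, so they grow while a constraint is violated and shrink back toward zero once it is satisfied.

```python
def training_step(x, u, encoder, decoder, adversary,
                  lam, eps, model_opt, adv_opt, dual_lr=0.01):
    """One alternating step on  L_r + lambda_1 (C_1 - eps_1) + lambda_2 (C_2 - eps_2).

    lam = (lambda_1, lambda_2) as plain floats; eps = (eps_1, eps_2) are the
    limits chosen by the concerned party. `estimate_terms` is the illustrative
    helper sketched above.
    """
    lam1, lam2 = lam
    eps1, eps2 = eps

    # (1) The adversary p_psi maximizes the objective (it only influences C_2,
    #     but maximizing the full expression is equivalent).
    L_r, C_1, C_2 = estimate_terms(x, u, encoder, decoder, adversary)
    adv_opt.zero_grad()
    (-(L_r + lam1 * (C_1 - eps1) + lam2 * (C_2 - eps2))).backward()
    adv_opt.step()

    # (2) The encoder q_phi and decoder p_theta minimize the same objective.
    L_r, C_1, C_2 = estimate_terms(x, u, encoder, decoder, adversary)
    model_opt.zero_grad()
    (L_r + lam1 * (C_1 - eps1) + lam2 * (C_2 - eps2)).backward()
    model_opt.step()

    # (3) Projected gradient ascent on the lambdas: a violated constraint
    #     (C > eps) pushes its lambda up; a satisfied one lets it decay,
    #     but never below zero.
    lam1 = max(0.0, lam1 + dual_lr * (C_1.item() - eps1))
    lam2 = max(0.0, lam2 + dual_lr * (C_2.item() - eps2))
    return lam1, lam2
```

Note how step (3) mirrors the intuition above: the returned \(\lambda\)s carry the accumulated emphasis on each constraint into the next training step.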

Results

With this last piece of the puzzle in place, all that’s left to do is evaluate whether our theory leads to learning controllable fair representations in practice. To evaluate, we learn representations of three real-world datasets:

  • the UCI German credit dataset of 1,000 individuals, where the binary sensitive attribute age < 50 / age > 50 was to be protected
  • the UCI Adult dataset of 40,000 adults from the US Census, where the binary sensitive attribute Man / Woman [2] was to be protected
  • and the Heritage Health dataset of 60,000 patients, where the sensitive attribute to be protected was the intersection of age and gender: the age-group (of 9 possible age-groups) \(\times\) the gender (Man / Woman) [2]

Sure enough, our results confirm that, in all three sets of learned representations, the concerned party’s choices for \(\epsilon_1​\) and \(\epsilon_2​\) control the approximations of unfairness \(C_1​\) and \(C_2​\):

For all three datasets, we learn representations such that
\(C_1 \approx \epsilon_1\) is satisfied and \(C_2 \approx \epsilon_2\) is satisfied.

Our results also demonstrate that, compared to existing methods, our method can produce more expressive representations:

For a range of constraints \(\epsilon_2\), our method (dark blue) learns
more expressive representations than existing methods (light blue).

And our method is able to take care of many notions of fairness simultaneously.

|                  | \(I(x;z \mid u)\) (higher is more expressive) | \(C_1\) (lower is fairer) | \(C_2\) (lower is fairer) | \(I_{EO}\) (lower is fairer) | \(I_{EOpp}\) (lower is fairer) |
| --- | --- | --- | --- | --- | --- |
| constraints      | (none) | < 10 | < 0.1 | < 0.1 | < 0.1 |
| our method       | 9.94 | 9.95 | 0.08 | 0.09 | 0.04 |
| existing methods | 9.34 | 9.39 | 0.09 | 0.10 | 0.07 |

When learning representations of the Adult dataset that satisfy many fairness constraints (on demographic parity, equality of odds, and equality of opportunity), our method learns representations that are more expressive and are better on all but one measure of fairness.

While these last two results may seem surprising, they have a simple explanation: existing methods require the concerned party to run the learning process many times until they stumble upon representations they find roughly satisfying, whereas our method directly optimizes for representations that are as expressive as possible while satisfying all of the concerned party’s limits on unfairness.

Takeaways

To complement fair machine learning models that corporations and governments can choose to use, this work takes a step towards putting control of fair machine learning in the hands of a party concerned with fairness, such as a data collector, community organizer, or regulatory body. We contribute a theoretical approach to learning fair representations that make it much more difficult for downstream machine learning models to discriminate, and we contribute a new method that allows the concerned party to control the fairness of the representations by requesting specific limits on unfairness, \(\epsilon\).

When working on fair machine learning, it is particularly important to acknowledge limitations and blind spots; otherwise we risk building toy solutions while overshadowing others’ work towards equity. A major limitation of our work is that the concerned party’s \(\epsilon\) limits an approximation of unfairness, and we hope that future work can go further and map \(\epsilon\) to formal guarantees about the fairness of downstream machine learning. Another potential limitation of this work is that we, like much of the fair machine learning community, center demographic parity, equality of odds, and equality of opportunity notions of fairness. We believe that future work will need to develop deeper connections to social-justice-informed notions of equity if it is to avoid shallow technosolutionism and build more equitable machine learning (Onuoha and Cyborg 2018).

Footnotes
  1. For conciseness, we focus on demographic parity, a pretty intuitive and strict notion of fairness, but our approach works with many notions of fairness, as shown in the final table. 

  2. Gender is not binary, and the treatment of gender as binary when using these datasets is problematic and a limitation of this work.

References
  1. Madras, David, Elliot Creager, Toniann Pitassi, and Richard Zemel. 2018. “Learning Adversarially Fair and Transferable Representations,” February.
  2. Zemel, Rich, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. “Learning Fair Representations.” In International Conference on Machine Learning, 325–33.
  3. Edwards, Harrison, and Amos Storkey. 2015. “Censoring Representations with an Adversary.” ArXiv Preprint ArXiv:1511.05897, November.
  4. Louizos, Christos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. 2015. “The Variational Fair Autoencoder.” ArXiv Preprint ArXiv:1511.00830, November.
  5. Onuoha, Mimi, and Mother Cyborg. 2018. A People’s Guide to AI. Allied Media Projects.