DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision
Authors: Sungwon Han, Seungeon Lee, Fangzhao Wu, Sundong Kim, Chuhan Wu, Xiting Wang, Xing Xie, Meeyoung Cha
Abstract: Algorithmic fairness has become an important machine learning problem, especially for mission-critical Web applications. This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations. Unlike existing models that target a single type of fairness, our model jointly optimizes for two fairness criteria, group fairness and counterfactual fairness, and hence makes fairer predictions at both the group and individual levels. Our model uses contrastive loss to generate embeddings that are indistinguishable for each protected group, while forcing the embeddings of counterfactual pairs to be similar. It then uses a self-knowledge distillation method to maintain the quality of representation for downstream tasks. Extensive analyses over multiple datasets confirm the model's validity and further show the synergy of jointly addressing two fairness criteria, suggesting the model's potential value in fair intelligent Web applications.
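To make the contrastive component of the abstract concrete, the sketch below shows one plausible way such an objective could look: each sample and its counterfactual (same features, flipped sensitive attribute) form a positive pair, which pulls counterfactual embeddings together and makes protected groups harder to distinguish. This is a minimal illustration under stated assumptions, not the authors' implementation; the names `dual_fair_contrastive_loss`, `encoder`, `flip_sensitive`, and `temperature` are hypothetical.

```python
# Minimal sketch (assumption, not the paper's code): an InfoNCE-style loss
# where z[i] and z_cf[i] (embedding of the counterfactual sample) are the
# positive pair and all other batch samples act as negatives.
import torch
import torch.nn.functional as F


def dual_fair_contrastive_loss(z: torch.Tensor, z_cf: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    z = F.normalize(z, dim=1)        # embeddings of original samples
    z_cf = F.normalize(z_cf, dim=1)  # embeddings of counterfactual samples
    logits = z @ z_cf.t() / temperature              # pairwise cosine similarities
    targets = torch.arange(z.size(0), device=z.device)
    # Cross-entropy over similarities: the counterfactual pair is the target class.
    return F.cross_entropy(logits, targets)


# Usage sketch: `encoder` maps features to embeddings; `flip_sensitive` returns a
# counterfactual copy of the batch with the protected attribute flipped.
# loss = dual_fair_contrastive_loss(encoder(x), encoder(flip_sensitive(x)))
```

In this formulation, minimizing the loss simultaneously enforces similarity of counterfactual pairs and discourages the embedding space from encoding the sensitive attribute; the paper additionally applies self-knowledge distillation to preserve representation quality for downstream tasks, which is omitted here.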