An Investigation of Representation and Allocation Harms in Contrastive Learning

Citation:

Maity, S., Agarwal, M., Yurochkin, M., & Sun, Y. (2024). An Investigation of Representation and Allocation Harms in Contrastive Learning. In The Twelfth International Conference on Learning Representations. Retrieved from https://openreview.net/forum?id=q4SiDyYQbo

Abstract:

The effect of underrepresentation on the performance of minority groups is known to be a serious problem in supervised learning settings; however, it has so far been underexplored in the context of self-supervised learning (SSL). In this paper, we demonstrate that contrastive learning (CL), a popular variant of SSL, tends to collapse representations of minority groups with those of certain majority groups. We refer to this phenomenon as representation harm and demonstrate it on image and text datasets using the corresponding popular CL methods. Furthermore, our causal mediation analysis of allocation harm on a downstream classification task reveals that representation harm is partly responsible for it, thus emphasizing the importance of studying and mitigating representation harm. Finally, we provide a theoretical explanation for representation harm using a stochastic block model that leads to representational neural collapse in a contrastive learning setting.

Notes:

Publisher's Version

Last updated on 12/07/2024