<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Maity, Subha</style></author><author><style face="normal" font="default" size="100%">Mukherjee, Debarghya</style></author><author><style face="normal" font="default" size="100%">Yurochkin, Mikhail</style></author><author><style face="normal" font="default" size="100%">Sun, Yuekai</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Does enforcing fairness mitigate biases caused by subpopulation shift?</style></title><secondary-title><style face="normal" font="default" size="100%">Advances in Neural Information Processing Systems</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2021</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://proceedings.neurips.cc/paper/2021/hash/d800149d2f947ad4d64f34668f8b20f6-Abstract.html</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Curran Associates, Inc.</style></publisher><volume><style face="normal" font="default" size="100%">34</style></volume><pages><style face="normal" font="default" size="100%">25773–25784</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Many instances of algorithmic bias are caused by subpopulation shifts. For example, ML models often perform worse on demographic groups that are underrepresented in the training data. In this paper, we study whether enforcing algorithmic fairness during training improves the performance of the trained model in the target domain. On one hand, we conceive scenarios in which enforcing fairness does not improve performance in the target domain. In fact, it may even harm performance. On the other hand, we derive necessary and sufficient conditions under which enforcing algorithmic fairness leads to the Bayes model in the target domain. We also illustrate the practical implications of our theoretical results in simulations and on real data.</style></abstract></record></records></xml>