Mitigating Demographic Bias in Facial Recognition through Adversarial Representation Learning: A Publication-Quality Data-Driven Study

Abstract

Demographic bias in facial recognition systems is a critical obstacle to the responsible development and deployment of artificial intelligence, with profound social and ethical consequences. This study presents a thorough, transparent evaluation of adversarial representation learning for mitigating demographic bias, grounded in a rigorous, publication-quality dataset. We show that every demographic group exhibits genuine, non-uniform error rates, and that no group attains flawless performance, mirroring real-world limitations. With a debiased model, we observe improvements, though not full equalisation, across all demographic groups. Our findings are supported by careful statistical analysis and aim to set a benchmark for fairness research in AI. We examine the complex ethical and social implications of this work and offer practical guidance for policymakers and practitioners on deploying fair facial recognition systems. Finally, we consider the dual-use risks of improved facial recognition technology, underscoring the need for both technical and legislative safeguards against misuse in ethically sensitive settings such as surveillance.
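
The excerpt does not include the authors' implementation. As a hedged illustration of the general technique the abstract names, the sketch below (in PyTorch) trains a face encoder jointly against a recognition head and a demographic adversary through a gradient-reversal layer, in the spirit of domain-adversarial training (Ganin et al., 2016) and adversarial debiasing (Zhang, Lemoine, and Mitchell, 2018). Every module name, dimension, and hyper-parameter here is a placeholder, not the paper's actual architecture.

```python
# Minimal sketch of adversarial representation learning for debiasing,
# assuming PyTorch. Shapes, layer sizes, and lambda are illustrative only.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on the
    backward pass, so the encoder learns features that help recognition
    while *hurting* the adversary's demographic prediction."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DebiasedFaceModel(nn.Module):
    def __init__(self, in_dim=3 * 64 * 64, feat_dim=128,
                 n_identities=1000, n_groups=4):
        super().__init__()
        # A real system would use a CNN backbone; a linear encoder keeps
        # the sketch self-contained.
        self.encoder = nn.Sequential(nn.Flatten(),
                                     nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.task_head = nn.Linear(feat_dim, n_identities)  # face recognition
        self.adversary = nn.Linear(feat_dim, n_groups)      # demographic group

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        return self.task_head(z), self.adversary(GradReverse.apply(z, lambd))

model = DebiasedFaceModel()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, identities, groups, lambd=1.0):
    """One joint update: the adversary minimises its own loss, while the
    reversed gradient pushes the encoder towards group-invariant features."""
    task_logits, adv_logits = model(images, lambd)
    loss = criterion(task_logits, identities) + criterion(adv_logits, groups)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this formulation a single scalar lambda trades recognition accuracy against demographic invariance, which matches the abstract's observation that debiasing yields improvement rather than full equalisation across groups.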


¹Ali A. Al-Arbo, ²Younis Al-Arbo

  1. Department of English Language, College of Arts, University of Mosul, Nineveh, Iraq
  2. Department of Computer Science, College of Education for Pure Science, University of Mosul, Nineveh, Iraq

IRJIET, Volume 9, Issue 6, June 2025, pp. 264–271

https://doi.org/10.47001/IRJIET/2025.906035
