DISCRIMINATING SYSTEMS
Gender, Race, and Power in AI

Sarah Myers West, AI Now Institute, New York University
Meredith Whittaker, AI Now Institute, New York University, Google Open Research
Kate Crawford, AI Now Institute, New York University, Microsoft Research

APRIL 2019

Cite as: West, S.M., Whittaker, M. and Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute. Retrieved from ainowinstitute.org/discriminatingsystems.html.

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

CONTENTS

RESEARCH FINDINGS
RECOMMENDATIONS
INTRODUCTION
WHICH HUMANS ARE IN THE LOOP? HOW WORKFORCES AND AI SYSTEMS INTERACT
WHO MAKES AI?
    Diversity Statistics in the AI Industry: Knowns and Unknowns
FROM WORKFORCES TO AI SYSTEMS: THE DISCRIMINATION FEEDBACK LOOP
CORPORATE DIVERSITY: BEYOND THE PIPELINE PROBLEM
    Core Themes in Pipeline Research
    Limitations of Pipeline Research
    Pipeline Dreams: After Years of Research, the Picture Worsens
WORKER-LED INITIATIVES
THE PUSHBACK AGAINST DIVERSITY
CONCLUSION

RESEARCH FINDINGS

There is a diversity crisis in the AI sector across gender and race. Recent studies found that only 18% of authors at leading AI conferences are women,[i] and more than 80% of AI professors are men.[ii] This disparity is extreme in the AI industry:[iii] women comprise only 15% of AI research staff at Facebook and 10% at Google. There is no public data on trans workers or other gender minorities. For black workers, the picture is even worse. For example, only 2.5% of Google's workforce is black, while Facebook and Microsoft are each at 4%. Given decades of concern and investment to redress this imbalance, the current state of the field is alarming.

The AI sector needs a profound shift in how it addresses the current diversity crisis. The AI industry needs to acknowledge the gravity of its diversity problem, and admit that existing methods have failed to contend with the uneven distribution of power, and the means by which AI can reinforce such inequality. Further, many researchers have shown that bias in AI systems reflects historical patterns of discrimination. These are two manifestations of the same problem, and they must be addressed together.

The overwhelming focus on women in tech is too narrow and likely to privilege white women over others. We need to acknowledge how the intersections of race, gender, and other identities and attributes shape people's experiences with AI. The vast majority of AI studies assume gender is binary, and commonly assign people as male or female based on physical appearance and stereotypical assumptions, erasing all other forms of gender identity.

Fixing the pipeline won't fix AI's diversity problems. Despite many decades of pipeline studies that assess the flow of diverse job candidates from school to industry, there has been no substantial progress in diversity in the AI industry. The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether.

The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation. The histories of race science are a grim reminder that race and gender classification based on appearance is scientifically flawed and easily abused.
Systems that use physical appearance as a proxy for character or interior states are deeply suspect, including AI tools that claim to detect sexuality from headshots,[iv] predict criminality based on facial features,[v] or assess worker competence via micro-expressions.[vi] Such systems are replicating patterns of racial and gender bias in ways that can deepen and justify historical inequality. The commercial deployment of these tools is cause for deep concern.

i. Element AI. (2019). Global AI Talent Report 2019. Retrieved from jfgagne.ai/talent-2019/.
ii. AI Index 2018. (2018). Artificial Intelligence Index 2018. Retrieved from cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf.
iii. Simonite, T. (2018). AI is the future - but where are the women? WIRED. Retrieved from wired.com/story/artificial-intelligence-researchers-gender-imbalance/.
iv. Wang, Y., & Kosinski, M. (2017). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology.
v. Wu, X. and Zhang, X. (2016). Automated Inference on Criminality using Face Images. Retrieved from arxiv.org/pdf/1611.04135v2.pdf.
vi. Rhue, L. (2018). Racial Influence on Automated Perceptions of Emotions. Retrieved from papers.ssrn.com/sol3/papers.cfm?abstract_id=3281765.

RECOMMENDATIONS

Recommendations for Improving Workplace Diversity

1. Publish compensation levels, including bonuses and equity, across all roles and job categories, broken down by race and gender.

2. End pay and opportunity inequality, and set pay and benefit equity goals that include contract workers, temps, and vendors.

3. Publish harassment and discrimination transparency reports, including the number of claims over time, the types of claims submitted, and actions taken.

4. Change hiring practices to maximize diversity: include targeted recruitment beyond elite universities, ensure more equitable focus on under-represented groups, and create more pathways for contractors, temps, and vendors to become full-time employees.

5. Commit to transparency around hiring practices, especially regarding how candidates are leveled, compensated, and promoted.

6. Increase the number of people of color, women, and other under-represented groups at senior leadership levels of AI companies across all departments.

7. Ensure executive incentive structures are tied to increases in the hiring and retention of under-represented groups.

8. For academic workplaces, ensure greater diversity in all spaces where AI research is conducted, including AI-related departments and conference committees.

Recommendations for Addressing Bias and Discrimination in AI Systems

9. Remedying bias in AI systems is almost impossible when these systems are opaque. Transparency is essential, and begins with tracking and publicizing where AI systems are used, and for what purpose.

10. Rigorous testing should be required across the lifecycle of AI systems in sensitive domains. Pre-release trials, independent auditing, and ongoing monitoring are necessary to test for bias, discrimination, and other harms.

11. The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise.

12. The methods for addressing bias and discrimination in AI need to expand to include assessments of whether certain systems should be designed at all, based on a thorough risk assessment.
INTRODUCTION

There is a diversity crisis in the AI industry, and a moment of reckoning is underway. Over the past few months, employees have been protesting across the tech industry where AI products are created. In April 2019, Microsoft employees met with CEO Satya Nadella to discuss issues of harassment, discrimination, unfair compensation, and lack of promotion for women at the company.[1] There are claims that sexual harassment complaints have not been taken seriously enough by HR across the industry.[2] And at Google, there was a historic global walkout in November 2018 of 20,000 employees over a culture of inequity and sexual harassment inside the company, triggered by revelations that Google had paid $90m to a male executive accused of serious misconduct.[3] This is just one face of the diversity disaster that now reaches across the entire AI sector.

The statistics for both gender and racial diversity are alarmingly low. For example, women comprise 15% of AI research staff at Facebook and just 10% at Google.[4] It's not much better in academia, with recent studies showing only 18% of authors at leading AI conferences are women,[5] and more than 80% of AI professors are male.[6] For black workers, the picture is worse. For example, only 2.5% of Google's workforce is black,[7] while Facebook and Microsoft are each at 4%.[8,9] We have no data on trans workers or other gender minorities. Given decades of concern and investment to redress these imbalances, the current state of the field is alarming.

The diversity problem is not just about women. It's about gender, race, and, most fundamentally, about power.[10] It affects how AI companies work, what products get built, who they are designed to serve, and who benefits from their development.

This report is the culmination of a year-long pilot study examining the scale of AI's current diversity crisis and possible paths forward. It draws on a thorough review of existing literature and current research on issues of gender, race, class, and artificial intelligence. The review was purposefully scoped to encompass a variety of disciplinary and methodological perspectives, incorporating literature from computer science, the social sciences, and the humanities. It represents the first stage of a multi-year project examining the intersection of gender, race, and power in AI, and will be followed by further studies and research articles on related issues.

1. Tiku, N. (2019, Apr. 4). Microsoft Employees Protest Treatment of Women to CEO Nadella. WIRED. Retrieved from wired.com/story/microsoft-employees-protest-treatment-women-ceo-nadella/.
2. Gershgorn, D. (2019, Apr. 4). Amid employee uproar, Microsoft is investigating sexual harassment claims overlooked by HR. Quartz. Retrieved from qz.com/1587477/microsoft-investigating-sexual-harassment-claims-overlooked-by-hr/.
3. Statt, N. (2018, Nov. 2). Over 20,000 Google employees participated in yesterday's mass walkout. The Verge. Retrieved from theverge.com/2018/11/2/18057716/google-walkout-20-thousand-employees-ceo-sundar-pichai-meeting.
4. Simonite, T. (2018). AI is the future - but where are the women? WIRED. Retrieved from wired.com/story/artificial-intelligence-researchers-gender-imbalance/.
5. Element AI. (2019). Global AI Talent Report 2019. Retrieved from jfgagne.ai/talent-2019/.
6. AI Index 2018. (2018). Artificial Intelligence Index 2018. Retrieved from cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf.
7. Google. (2018). Google Diversity Annual Report 2018. Retrieved from static.googleusercontent.com/media/diversity.google/en/static/pdf/Google_Diversity_annual_report_2018.pdf.
8. Williams, M. (2018, July 12). Facebook 2018 Diversity Report: Reflecting on Our Journey. Retrieved from newsroom.fb.com/news/2018/07/diversity-report/.
9. Microsoft. (2019). Diversity & Inclusion. Retrieved from microsoft.com/en-us/diversity/default.aspx.
10. As authors of this report, we feel it's important to acknowledge that, as white women, we don't experience the intersections of oppression in the same way that people of color and gender minorities, among others, do. But the silence of those who experience privilege in this space is part of the problem: it is one reason progress on diversity issues moves so slowly. It is important that those of us who do work in this space address these issues openly, and act to center the communities most affected.

To date, the diversity problems of the AI industry and the issues of bias in the systems it builds have tended to be considered separately. But we suggest that these are two versions of the same problem: issues of discrimination in the workforce and in system building are deeply intertwined. Moreover, tackling the challenges of bias within technical systems requires addressing workforce diversity, and vice versa. Our research suggests new ways of understanding the relationships between these complex problems, which can open up new pathways to redressing the current imbalances and harms.

From a high-level view, AI systems function as systems of discrimination: they are classification technologies that differentiate, rank, and categorize. But discrimination is not evenly distributed. A steady stream of examples in recent years has demonstrated a persistent problem of gender- and race-based discrimination (among other attributes and forms of identity). Image recognition technologies miscategorize black faces,[11] sentencing algorithms discriminate against black defendants,[12] chatbots easily adopt racist and misogynistic language when trained on online discourse,[13] and Uber's facial recognition doesn't work for trans drivers.[14] In most cases, such bias mirrors and replicates existing structures of inequality in society.

In the face of growing evidence, the AI research community, and the industry producing AI products, has begun addressing the problem of bias by building on a body of work on fairness, accountability, and transparency. This work has commonly focused on adjusting AI systems in ways that produce a result deemed "fair" by one of various mathematical definitions[15] (one such definition is sketched below). Alongside this, we see growing calls for ethics in AI, corporate ethics boards, and a push for more ethical AI development practices.[16]

But as the focus on AI bias and ethics grows, the scope of inquiry should expand to consider not only how AI tools can be biased technically, but how they are shaped by the environments in which they are built and the people who build them. By integrating these concerns, we can develop a more accurate understanding of how AI can be developed and employed in ways that are fair and just, and how we might be able to ensure both.
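To make concrete what the mathematical fairness definitions referenced above look like, here is a minimal sketch of one of the most common, demographic parity, which asks whether a model's positive-prediction rate is equal across groups. The function name and data are illustrative assumptions, not drawn from this report or from any particular fairness library.

    # Minimal sketch of demographic parity, one common mathematical
    # fairness definition. All names and data here are illustrative.

    def demographic_parity_gap(predictions, groups):
        """Largest difference in positive-prediction rates between groups.

        predictions: iterable of 0/1 model outputs.
        groups: iterable of group labels, aligned with predictions.
        """
        counts = {}  # group -> (positive predictions, total predictions)
        for pred, group in zip(predictions, groups):
            n_pos, n_total = counts.get(group, (0, 0))
            counts[group] = (n_pos + pred, n_total + 1)
        rates = [n_pos / n_total for n_pos, n_total in counts.values()]
        return max(rates) - min(rates)

    # Hypothetical hiring model: it recommends 3 of 5 candidates in group "a"
    # (rate 0.6) but only 1 of 5 in group "b" (rate 0.2), so the gap is 0.4.
    preds = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
    groups = ["a"] * 5 + ["b"] * 5
    print(round(demographic_parity_gap(preds, groups), 2))  # 0.4

Note what such a metric omits: a system can drive this gap to zero and still cause harm in deployment, which is precisely why this report argues that technical debiasing must be paired with analysis of the workplaces and contexts in which these systems are built and used.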
Currently, large-scale AI systems are developed almost exclusively in a handful of technology companies and a small set of elite university laboratories, spaces that in the West tend to be extremely white, affluent, technically oriented, and male.[17] These are also spaces that have a history of problems of discrimination, exclusion, and sexual harassment. As Melinda Gates describes, "men who demean, degrade or disrespect women have been able to operate with such impunity, not just in Hollywood, but in tech, venture capital, and other spaces where

11. Alcine, J. (2015). Twitter. Retrieved from twitter.com/jackyalcine/status/615329515909156865.
12. Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016, May 3). Machine Bias. ProPublica. Retrieved from propublica.org.