AI Now 2019 Report

AUTHORS AND CONTRIBUTORS

Kate Crawford, AI Now Institute, New York University, Microsoft Research
Roel Dobbe, AI Now Institute, New York University
Theodora Dryer, AI Now Institute, New York University
Genevieve Fried, AI Now Institute, New York University
Ben Green, AI Now Institute, New York University
Elizabeth Kaziunas, AI Now Institute, New York University
Amba Kak, AI Now Institute, New York University
Varoon Mathur, AI Now Institute, New York University
Erin McElroy, AI Now Institute, New York University
Andrea Nill Sánchez, AI Now Institute, New York University
Deborah Raji, AI Now Institute, New York University
Joy Lisi Rankin, AI Now Institute, New York University
Rashida Richardson, AI Now Institute, New York University
Jason Schultz, AI Now Institute, New York University School of Law
Sarah Myers West, AI Now Institute, New York University
Meredith Whittaker, AI Now Institute, New York University

With research assistance from Alejandro Calcaño Bertorelli and Joan Greenbaum (AI Now Institute, New York University).

DECEMBER 2019

Cite as: Crawford, Kate, Roel Dobbe, Theodora Dryer, Genevieve Fried, Ben Green, Elizabeth Kaziunas, Amba Kak, Varoon Mathur, Erin McElroy, Andrea Nill Sánchez, Deborah Raji, Joy Lisi Rankin, Rashida Richardson, Jason Schultz, Sarah Myers West, and Meredith Whittaker. AI Now 2019 Report. New York: AI Now Institute, 2019, ainowinstitute.org/AI_Now_2019_Report.html.

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

TABLE OF CONTENTS

ABOUT THE AI NOW INSTITUTE
RECOMMENDATIONS
EXECUTIVE SUMMARY
1. THE GROWING PUSHBACK AGAINST HARMFUL AI
   1.1 AI, Power, and Control
       Worker Productivity, AI, and “The Rate”
       Algorithmic Wage Control
       AI in Hiring Tech
       Labor Automation's Disparate Impacts
       The Limits of Corporate AI Ethics
       How AI Companies Are Inciting Geographic Displacement
   1.2 Organizing Against and Resisting Consolidations of Power
       Organizing and Pushback
       Community Organizing
       Worker Organizing
       Student Organizing
   1.3 Law and Policy Responses
       Data Protection as the Foundation of the Majority of AI Regulatory Frameworks
       Biometric Recognition Regulation
       Algorithmic Accountability and Impact Assessments
       Experimentation with Task Forces
       Litigation Is Filling Some of the Void
2. EMERGING AND URGENT CONCERNS IN 2019
   2.1 The Private Automation of Public Infrastructure
       AI and Neighborhood Surveillance
       Smart Cities
       AI at the Border
       National Biometric Identity Systems
       China AI Arms Race Narrative
   2.2 From “Data Colonialism” to Colonial Data
       The Abstraction of “Data Colonialism” and Context Erasure
       Colonial Data: Statistics and Indigenous Data Sovereignty
   2.3 Bias Built In
   2.4 AI and the Climate Crisis
       AI Makes Tech Dirtier
       AI and the Fossil Fuel Industry
       Opacity and Obfuscation
   2.5 Flawed Scientific Foundations
       Facial/Affect Recognition
       Face Datasets
   2.6 Health
       The Expanding Scale and Scope of Algorithmic Health Infrastructures
       New Social Challenges for the Healthcare Community
   2.7 Advances in the Machine Learning Community
       The Tough Road Toward Sociotechnical Perspectives
       Confronting AI's Inherent Vulnerabilities
CONCLUSION
ENDNOTES

ABOUT THE AI NOW INSTITUTE

The AI Now Institute at New York University is an interdisciplinary research institute dedicated to understanding the social implications of AI technologies.
It is the first university research center focused specifically on AI's social significance. Founded by Kate Crawford and Meredith Whittaker in 2017, AI Now is one of the few women-led AI institutes in the world.

AI Now works with a broad coalition of stakeholders, including academic researchers, industry, civil society, policymakers, and impacted communities, to understand and address issues raised by the rapid introduction of AI across core social domains. AI Now produces interdisciplinary research to help ensure that AI systems are accountable to the communities and contexts they are meant to serve, and that they are applied in ways that promote justice and equity. The Institute's current research agenda focuses on four core areas: bias and inclusion, rights and liberties, labor and automation, and safety and critical infrastructure.

Our most recent publications include:

• Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, an article on how "dirty-policing" practices and policies shape the environment and the methodology by which data is created, raising the risk of creating inaccurate, skewed, or systematically biased "dirty data."
• Anatomy of an AI System, a large-scale map and longform essay produced in partnership with SHARE Lab, which investigates the human labor, data, and planetary resources required to operate an Amazon Echo.
• Discriminating Systems: Gender, Race, and Power in AI, a report that examines how discrimination and inequality in the AI sector are replicated in AI technology and offers recommendations for change.
• Disability, Bias, and AI, a report drawing on a wealth of research from disability advocates and scholars to examine what disability studies and activism can tell us about the risks and possibilities of AI.
• Excavating AI, an essay on the politics of images in machine learning training sets.
• Litigating Algorithms 2019 US Report: New Challenges to Government Use of Algorithmic Decision Systems, our second major report assessing recent court cases focused on government use of algorithms.

We also host expert workshops and public events on a wide range of topics. Our annual public AI Now Symposium convenes leaders from academia, industry, government, and civil society to examine the biggest challenges we face as AI moves into our everyday lives. Recordings of the program are available online.

More information is available at ainowinstitute.org.

RECOMMENDATIONS

1. Regulators should ban the use of affect recognition in important decisions that impact people's lives and access to opportunities. Until then, AI companies should stop deploying it. Given the contested scientific foundations of affect recognition technology (a subclass of facial recognition that claims to detect things such as personality, emotions, mental health, and other interior states), it should not be allowed to play a role in important decisions about human lives, such as who is interviewed or hired for a job, the price of insurance, patient pain assessments, or student performance in school. Building on last year's recommendation for stringent regulation, governments should specifically prohibit the use of affect recognition in high-stakes decision-making processes.

2. Government and business should halt all use of facial recognition in sensitive social and political contexts until the risks are fully studied and adequate regulations are in place. In 2019, there has been a rapid expansion of facial recognition in many domains. Yet there is mounting evidence that this technology causes serious harm, most often to people of color and the poor. There should be a moratorium on all uses of facial recognition in sensitive social and political domains (including surveillance, policing, education, and employment) where facial recognition poses risks and consequences that cannot be remedied retroactively. Lawmakers must supplement a moratorium with (1) transparency requirements that allow researchers, policymakers, and communities to assess and understand the best possible approach to restricting and regulating facial recognition; and (2) protections that provide the communities on whom such technologies are used with the power to make their own evaluations and rejections of its deployment.

3. The AI industry needs to make significant structural changes to address systemic racism, misogyny, and lack of diversity. The AI industry is strikingly homogeneous, due in large part to its treatment of women, people of color, gender minorities, and other underrepresented groups. To begin addressing this problem, more information should be shared publicly about compensation levels, response rates to harassment and discrimination, and hiring practices. It also requires ending pay and opportunity inequality and providing real incentives for executives to create, promote, and protect inclusive workplaces. Finally, any measures taken should address the two-tiered workforce, in which many of the people of color at tech companies work as undercompensated and vulnerable temporary workers, vendors, or contractors.

4. AI bias research should move beyond technical fixes to address the broader politics and consequences of AI's use. Research on AI bias and fairness has begun to expand beyond technical solutions that target statistical parity, but there needs to be a much more rigorous examination of AI's politics and consequences, including close attention to AI's classification practices and harms. This will require that the field center "non-technical" disciplines whose work traditionally examines such issues, including science and technology studies, critical race studies, disability studies, and other disciplines keenly attuned to social context, including how difference is constructed, the work of classification, and its consequences.
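To make concrete what "technical solutions that target statistical parity" refers to, here is a minimal sketch (in Python, with hypothetical data) of a demographic-parity check. The report's argument is precisely that a single number like this cannot capture classification practices, social context, or downstream harms.

```python
# Minimal sketch of a "statistical parity" check, using hypothetical data:
# the kind of narrow technical fix the report says bias research must move beyond.
from typing import Sequence

def demographic_parity_difference(preds: Sequence[int], groups: Sequence[str]) -> float:
    """Difference in positive-prediction rates between two groups.

    A value near 0 is often read as "fair" under statistical parity, but it
    says nothing about context, classification practices, or downstream harms.
    """
    unique = sorted(set(groups))
    assert len(unique) == 2, "sketch handles exactly two groups"
    rates = []
    for g in unique:
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rates.append(sum(members) / len(members))
    return abs(rates[0] - rates[1])

# Hypothetical hiring-screen outputs: 1 = advanced to interview.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```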
5. Governments should mandate public disclosure of the AI industry's climate impact. Given the significant environmental impacts of AI development, as well as the concentration of power in the AI industry, it is important for governments to ensure that large-scale AI providers disclose the climate costs of AI development to the public. As with similar requirements for the automotive and airline industries, such disclosure helps provide the foundation for more informed collective choices around climate and technology. Disclosure should include notifications that allow developers and researchers to understand the specific climate cost of their use of AI infrastructure. Climate-impact reporting should be separate from any accounting for offsets or other mitigation strategies. In addition, governments should use that data to ensure that AI policies take into account the climate impacts of any proposed AI deployment.
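As a rough illustration of the kind of per-job climate-cost notification this recommendation envisions, the sketch below estimates a training run's emissions as energy use (GPU count × power draw × hours × datacenter overhead) multiplied by grid carbon intensity. Every figure and the function itself are illustrative assumptions, not a reporting standard; real disclosure would rely on metered energy and local grid data.

```python
# Minimal sketch of a per-job "climate cost" estimate. All numbers are
# illustrative assumptions, not measurements or a disclosure standard.

def training_co2e_kg(gpu_count: int, gpu_power_kw: float, hours: float,
                     pue: float = 1.5, grid_kg_co2e_per_kwh: float = 0.4) -> float:
    """Estimate kg CO2e for a training run.

    energy (kWh) = GPUs x power draw x hours x datacenter overhead (PUE);
    emissions    = energy x grid carbon intensity.
    """
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# Hypothetical job: 64 GPUs drawing 0.3 kW each for 72 hours.
print(f"{training_co2e_kg(64, 0.3, 72):.0f} kg CO2e")  # ~829 kg under these assumptions
```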
6. Workers should have the right to contest exploitative and invasive AI, and unions can help. The introduction of AI-enabled labor-management systems raises significant questions about worker rights and safety. The use of these systems, from Amazon warehouses to Uber and InstaCart, pools power and control in the hands of employers and harms mainly low-wage workers (who are disproportionately people of color) by setting productivity targets linked to chronic injuries, psychological stress, and even death, and by imposing unpredictable algorithmic wage cuts that undermine economic stability. Workers deserve the right to contest such determinations, and to collectively agree on workplace standards that are safe, fair, and predictable. Unions have traditionally been an important part of this process, which underscores the need for companies to allow their workers to organize without fear of retaliation.

7. Tech workers should have the right to know what they are building and to contest unethical or harmful uses of their work. Over the last two years, organized tech workers and whistleblowers have emerged as a powerful force for AI accountability, exposing secretive contracts and plans for harmful products, from autonomous weapons to tracking-and-surveillance infrastructure. Given the general-purpose nature of most AI technology, the engineers designing and developing a system are often unaware of how it will ultimately be used. An object-recognition model trained to enable aerial surveillance could just as easily be applied to disaster relief as it could to weapons targeting. Too often, decisions about how AI is used are left to sales departments and executives, hidden behind highly confidential contractual agreements that are inaccessible to workers and the public. Companies should ensure that workers are able to track where their work is being applied, by whom, and to what end. Providing such information enables workers to make ethical choices and gives them power to collectively contest harmful applications.

8. States should craft expanded biometric privacy laws that regulate both public and private actors. Biometric data, from DNA to faceprints, is at the core of many harmful AI systems. Over a decade ago, Illinois adopted the Biometric Information Privacy Act (BIPA), which has now become one of the strongest and most effective privacy protections in the United States. BIPA allows individuals to sue for almost any unauthorized collection and use of their biometric data by a private actor, including for surveillance, tracking, and profiling via facial recognition. BIPA also shuts down the gray and black markets that sell data and make it vulnerable to breaches and exploitation. States that adopt BIPA should expand it to include government use, which will mitigate many of biometric AI's harms, especially in parallel with other approaches, such as moratoriums and prohibitions.

9. Lawmakers need to regulate the integration of public and private surveillance infrastructures. This year, there was a surge in the integration of privately owned technological infrastructures with public systems, from "smart" cities to property tech to neighborhood surveillance systems such as Amazon's Ring and Rekognition. Large tech companies like Amazon, Microsoft, and Google also pursued major military and surveillance contracts, further enmeshing those interests. Across Asia, Africa, and Latin America, multiple governments continue to roll out biometric ID projects that create the infrastructure for both state and commercial surveillance. Yet few regulatory regimes govern this intersection.
We need strong transparency, accountability, and oversight in these areas, such as recent efforts to mandate public disclosure and debate of public-private tech partnerships, contracts, and acquisitions.[1]

10. Algorithmic Impact Assessments must account for AI's impact on climate, health, and geographical displacement. Algorithmic Impact Assessments (AIAs)[2] help governments, companies, and communities assess the social implications of AI, and determine whether and how to use AI systems. Those using AIAs should expand them so that, in addition to considering issues of bias, discrimination, and due process, the issues of climate, health, and geographical displacement are included.

11. Machine learning researchers should account for potential risks and harms and better document the origins of their models and data. Advances in understanding of bias, fairness, and justice in machine learning research make it clear that assessments of risks and harms are imperative. In addition, using new mechanisms for documenting data provenance and the specificities of individual mach
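As one illustration of the documentation mechanisms this recommendation points to, the sketch below defines a machine-readable record in the spirit of datasheets for datasets and model cards. The schema and every field name are hypothetical, not a published standard.

```python
# Minimal sketch of a machine-readable documentation record in the spirit of
# datasheets for datasets and model cards. Field names are illustrative only.
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class ModelDocumentation:
    model_name: str
    intended_use: str
    out_of_scope_uses: List[str]          # applications the developers disclaim
    training_data_sources: List[str]      # provenance of every dataset used
    data_collection_consent: str          # how consent was (or was not) obtained
    known_risks_and_harms: List[str]
    evaluated_subpopulations: List[str] = field(default_factory=list)

# Hypothetical record for a fictional hiring-screen model.
doc = ModelDocumentation(
    model_name="resume-screener-v2",
    intended_use="Ranking applications for human review",
    out_of_scope_uses=["automated rejection without human review"],
    training_data_sources=["internal hiring records, 2015-2018"],
    data_collection_consent="employment records; no individual opt-in",
    known_risks_and_harms=["may reproduce historical hiring bias"],
    evaluated_subpopulations=["gender", "race/ethnicity"],
)
print(json.dumps(asdict(doc), indent=2))
```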
