New York University AI Now 2017 Annual Report


AI Now 2017 Report

Authors:
Alex Campolo, New York University
Madelyn Sanfilippo, New York University
Meredith Whittaker, Google Open Research, New York University, and AI Now
Kate Crawford, Microsoft Research, New York University, and AI Now

Editors:
Andrew Selbst, Yale Information Society Project and Data & Society
Solon Barocas, Cornell University

Table of Contents

Recommendations
Executive Summary
Introduction
Labor and Automation
    Research by Sector and Task
    AI and the Nature of Work
    Inequality and Redistribution
Bias and Inclusion
    Where Bias Comes From
    The AI Field is Not Diverse
    Recent Developments in Bias Research
    Emerging Strategies to Address Bias
Rights and Liberties
    Population Registries and Computing Power
    Corporate and Government Entanglements
    AI and the Legal System
    AI and Privacy
Ethics and Governance
    Ethical Concerns in AI
    AI Reflects Its Origins
    Ethical Codes
    Challenges and Concerns Going Forward
Conclusion

Recommendations

These recommendations reflect the views and research of the AI Now Institute at New York University. We thank the experts who contributed to the AI Now 2017 Symposium and Workshop for informing these perspectives, and our research team for helping shape the AI Now 2017 Report.

1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g., "high stakes" domains), should no longer use "black box" AI and algorithmic systems. This includes the unreviewed or unvalidated use of pre-trained models, AI systems licensed from third-party vendors, and algorithmic processes created in-house. The use of such systems by public agencies raises serious due process concerns, and at a minimum they should be available for public auditing, testing, and review, and subject to accountability standards.

2.
Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design. As this is a rapidly changing field, the methods and assumptions by which such testing is conducted, along with the results, should be openly documented and publicly available, with clear versioning to accommodate updates and new findings.

3. After releasing an AI system, companies should continue to monitor its use across different contexts and communities. The methods and outcomes of monitoring should be defined through open, academically rigorous processes, and should be accountable to the public. Particularly in high stakes decision-making contexts, the views and experiences of traditionally marginalized communities should be prioritized.

4. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR. This research will complement the existing focus on worker replacement via automation. Specific attention should be given to the potential impact on labor rights and practices, and should focus especially on the potential for behavioral manipulation and the unintended reinforcement of bias in hiring and promotion.

5. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle. This is necessary to better understand and monitor issues of bias and representational skews. In addition to developing better records for how a training dataset was created and maintained, social scientists and measurement researchers within the AI bias research field should continue to examine existing training datasets, and work to understand potential blind spots and biases that may already be at work.

6. Expand AI bias research and mitigation strategies beyond a narrowly technical approach.
Bias issues are long term and structural, and contending with them necessitates deep interdisciplinary research. Technical approaches that look for a one-time "fix" for fairness risk oversimplifying the complexity of social systems. Within each domain, such as education, healthcare, or criminal justice, legacies of bias and movements toward equality have their own histories and practices. Legacies of bias cannot be "solved" without drawing on domain expertise. Addressing fairness meaningfully will require interdisciplinary collaboration and methods of listening across different disciplines.

7. Strong standards for auditing and understanding the use of AI systems "in the wild" are urgently needed. Creating such standards will require the perspectives of diverse disciplines and coalitions. The process by which such standards are developed should be publicly accountable, academically rigorous and subject to periodic review and revision.

8. Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development. Many now recognize that the current lack of diversity in AI is a serious issue, yet there is insufficiently granular data on the scope of the problem, which is needed to measure progress. Beyond this, we need a deeper assessment of workplace cultures in the technology industry, which requires going beyond simply hiring more women and minorities, toward building more genuinely inclusive workplaces.

9. The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision-making power.
As AI moves into diverse social and institutional domains, influencing increasingly high stakes decisions, efforts must be made to integrate social scientists, legal scholars, and others with domain expertise who can guide the creation and integration of AI into long-standing systems with established practices and norms.

10. Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms. More work is needed on how to substantively connect high-level ethical principles and guidelines for best practices to everyday development processes, promotion and product release cycles.

Executive Summary

Artificial intelligence (AI) technologies are in a phase of rapid development, and are being adopted widely. While the concept of artificial intelligence has existed for over sixty years, real-world applications have only accelerated in the last decade due to three concurrent developments: better algorithms, increases in networked computing power, and the tech industry's ability to capture and store massive amounts of data.

AI systems are already integrated in everyday technologies like smartphones and personal assistants, making predictions and determinations that help personalize experiences and advertise products. Beyond the familiar, these systems are also being introduced in critical areas like law, finance, policing and the workplace, where they are increasingly used to predict everything from our taste in music to our likelihood of committing a crime to our fitness for a job or an educational opportunity.

AI companies promise that the technologies they create can automate the toil of repetitive work, identify subtle behavioral patterns and much more. However, the analysis and understanding of artificial intelligence should not be limited to its technical capabilities.
The design and implementation of this next generation of computational tools presents deep normative and ethical challenges for our existing social, economic and political relationships and institutions, and these changes are already underway. Simply put, AI does not exist in a vacuum. We must also ask how broader phenomena like widening inequality, an intensification of concentrated geopolitical power and populist political movements will shape and be shaped by the development and application of AI technologies.

Building on the inaugural 2016 report, the AI Now 2017 Report addresses the most recent scholarly literature in order to raise critical social questions that will shape our present and near future. A year is a long time in AI research, and this report focuses on new developments in four areas: labor and automation, bias and inclusion, rights and liberties, and ethics and governance. We identify emerging challenges in each of these areas and make recommendations to ensure that the benefits of AI will be shared broadly, and that risks can be identified and mitigated.

Labor and automation: Popular media narratives have emphasized the prospect of mass job loss due to automation and the wide-scale adoption of robots. Such serious scenarios deserve sustained empirical attention, but some of the best recent work on AI and labor has focused instead on specific sectors and tasks. While few jobs will be completely automated in the near term, researchers estimate that about a third of workplace tasks can be automated for the majority of workers. New policies such as Universal Basic Income (UBI) are being designed to address concerns about job loss, but these need much more study. An underexplored area that needs urgent attention is how AI and related algorithmic systems are already changing the balance of workplace power.
Machine learning techniques are quickly being integrated into management and hiring decisions, including in the so-called gig economy where technical systems match workers with jobs, but also across more traditional white-collar industries. New systems make promises of flexibility and efficiency, but they also intensify the surveillance of workers, who often do not know when and how they are being tracked and evaluated, or why they are hired or fired. Furthermore, AI-assisted forms of management may replace more democratic forms of bargaining between workers and employers, increasing owner power under the guise of technical neutrality.

Bias and inclusion: One of the most active areas of critical AI research in the past year has been the study of bias, both in its more formal statistical sense and in the wider legal and normative senses. At their best, AI systems can be used to augment human judgement and reduce both our conscious and unconscious biases. However, training data, algorithms, and other design choices that shape AI systems may reflect and amplify existing cultural assumptions and inequalities. For example, natural language processing techniques trained on a corpus of internet writing from the 1990s may reflect stereotypical and dated word associations: the word "female" might be associated with "receptionist." If these models are used to make educational or hiring decisions, they may reinforce existing inequalities, regardless of the intentions or even knowledge of systems designers.

Those researching, designing and developing AI systems tend to be male, highly educated and very well paid. Yet their systems are working to predict and understand the behaviors and preferences of diverse populations with very different life experiences. More diversity within the fields building these systems will help ensure that they reflect a broader variety of viewpoints.
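The kind of word association described above can be probed directly in an embedding model by comparing cosine similarities. The sketch below is illustrative only: the three-dimensional vectors are invented for the example (real embeddings trained on web text have hundreds of dimensions, and the report names no specific model), but the similarity probe mirrors how bias researchers test gendered associations.

```python
from math import sqrt

# Toy word vectors, invented purely for illustration. A real probe
# would load embeddings trained on a large text corpus.
vectors = {
    "female":       [0.9, 0.1, 0.2],
    "male":         [0.1, 0.9, 0.2],
    "receptionist": [0.8, 0.2, 0.3],
    "engineer":     [0.2, 0.8, 0.3],
}

def cosine(u, v):
    """Cosine similarity: how closely aligned two word vectors are."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

# Association probe: which gendered word sits closer to each job title?
for job in ("receptionist", "engineer"):
    bias = (cosine(vectors["female"], vectors[job])
            - cosine(vectors["male"], vectors[job]))
    leaning = "female" if bias > 0 else "male"
    print(f"{job}: association leans {leaning} ({bias:+.3f})")
```

With these toy vectors, "receptionist" lands closer to "female" and "engineer" closer to "male"; in a hiring pipeline built on such a model, that skew would silently propagate into rankings.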
Rights and liberties: The application of AI systems in public and civil institutions is challenging existing political arrangements, especially in a global political context shaped by events such as the election of Donald Trump in the United States. A number of governmental agencies are already partnering with private corporations to deploy AI systems in ways that challenge civil rights and liberties. For example, police body camera footage is being used to train machine vision algorithms for law enforcement, raising privacy and accountability concerns. AI technologies are also…
