Explainable AI: Putting the user at the core

Disclaimer: This document includes references to third-party products/services. These references are intended to give a sense of current market developments and are not endorsements of these offerings.

© 2020 Association of Chartered Certified Accountants. February 2020.

About ACCA
ACCA (the Association of Chartered Certified Accountants) is the global body for professional accountants, offering business-relevant, first-choice qualifications to people of application, ability and ambition around the world who seek a rewarding career in accountancy, finance and management.

ACCA supports its 219,000 members and 527,000 students (including affiliates) in 179 countries, helping them to develop successful careers in accounting and business, with the skills required by employers. ACCA works through a network of 110 offices and centres and 7,571 Approved Employers worldwide, and 328 approved learning providers who provide high standards of learning and development. Through its public interest remit, ACCA promotes appropriate regulation of accounting and conducts relevant research to ensure accountancy continues to grow in reputation and influence.

ACCA has introduced major innovations to its flagship qualification to ensure its members and future members continue to be the most valued, up-to-date and sought-after accountancy professionals globally. Founded in 1904, ACCA has consistently held unique core values: opportunity, diversity, innovation, integrity and accountability.

About this report
This report shines a light on explainable AI and its implications for accountancy and finance professionals.

FOR FURTHER INFORMATION:
Narayanan Vaidyanathan, Head of Business Insights, ACCA

Contents
Executive summary
Introduction
1. Why explainability matters for accountancy and finance professionals
2. The explainability challenge
3. Approaches to explainability
4. Incorporating explainability into the agenda
5. Explainability in practice
Conclusion
References

Executive summary
Explainable artificial intelligence (XAI) emphasises the role of the algorithm not just in providing an output, but also in sharing with the user the supporting information on how the system reached a particular conclusion. XAI approaches aim to shine a light on the algorithm's inner workings and/or to reveal some insight into the factors that influenced its output. Furthermore, the idea is for this information to be available in a user-readable way, rather than being hidden within code.

Historically, the focus of research within AI has been on developing and iteratively improving complex algorithms, with the aim of improving accuracy. Implicitly, therefore, the attention has been on refining the quality of the answer, rather than on explaining the answer. But as AI matures, the latter is becoming increasingly important for enterprise adoption, both for decision making within a business and for post-fact audit of decisions made. Auditable algorithms are essentially ones that are explainable.

The complexity, speed and volume of AI decision making obscure what is going on in the background, the so-called black-box effect, which makes the model difficult to interrogate. Explainability, or any deficit thereof, affects the ability of professional accountants to display scepticism.
In a recent survey of members of ACCA and IMA (the Institute of Management Accountants), those agreeing with this view, at 54%, were more than twice the number who disagreed. Explainability is relevant to being able to trust the technology, and to being confident that it is being used ethically. XAI can help in this scenario with techniques to improve explainability. It may be helpful to think of it as a design principle as much as a set of tools: this is AI designed to augment the human ability to understand and interrogate the results returned by the model.

The purpose of this report is to address explainability from the perspective of practitioners, ie accountancy and finance professionals. For practitioners, explainability can improve the ability to assess the claims made by vendors for their marketed applications; enhance value captured from AI that is already in use; boost return on investment (ROI) from AI investments; and augment audit and assurance capabilities where data is managed using AI tools.

Key messages for practitioners
Maintain awareness of evolving trends in AI: 51% of survey respondents were unaware of XAI, which impairs their ability to engage with the technology. To raise awareness, this report sets out some of the key developments in this emerging area.
Beware of oversimplified narratives: in accountancy, AI is neither fully autonomous nor a complete fantasy. The middle path of augmenting, as opposed to replacing, the human actor works best when the user understands what the AI is doing; this needs explainability.
Embed explainability into enterprise adoption: consider the level of explainability needed, and how it can help with model performance, ethical use and legal compliance.

Key messages for policymakers
Policymakers, for instance in government or in regulatory bodies, frequently hear the developer/supplier perspective from the AI industry. This report can complement that with a view from the user/demand side, so that policy can incorporate consumer needs.
Explainability empowers consumers and regulators: improved explainability reduces the deep asymmetry between experts who understand AI and the wider public. For regulators, it can help reduce systemic risk by providing a better understanding of the factors influencing algorithms that are being deployed ever more widely across the marketplace.
Emphasise explainability as a design principle: an environment that balances innovation and regulation can be achieved by supporting industry to continue, indeed redouble, its efforts to include explainability as a core feature in product development.

Introduction
Artificial intelligence (AI) offers the capacity for machines to learn through exposure to examples and data, and to use that learning to drive inferences and decision making (ACCA 2019).¹ This is a step beyond automation, and additionally involves cognition. Cognition provides a value layer that opens up new insights, while automation provides an efficiency layer that reduces the costs of doing so.

¹ For an introduction to AI for accountants, see ACCA's CPD course 'Machine learning: an introduction for finance professionals'.

There is also a second important reason why AI features so prominently in our collective consciousness: it is a general purpose technology (GPT).
This means that it has the power and relevance to reimagine our entire way of living, beyond merely incremental effects. That contrasts with, say, shipping containers, which were a clever innovation but pertained specifically to the transport and logistics industry. The arrival of electricity at the turn of the twentieth century is a better parallel: it is not just a technology but an enabler that flows through every aspect of life, whether professional or personal. AI will probably have a similar impact.

WHAT IS EXPLAINABILITY?
Broadly speaking, to explain an AI algorithm means to be able to shine a light on its inner workings and/or to reveal some insight on what factors influenced its output, and to what extent, and for this information to be human-readable, ie not hidden within impenetrable lines of code.

Strictly speaking, interpretability refers to the ability to see inside a model transparently and understand its workings, while explainability relates to situations where the model's approach has to be inferred, rather than directly observed, because it is an opaque black box. This report, being aimed at users, will use 'explainability' to mean quite simply an understanding of how and why a model returns the results it does.

Explainability matters for reasons that trace back to why AI is a different kind of technology. That it is cognitive means that it can be non-trivial and easy to get wrong, given the complexities involved, and explainability is a checks-and-balances mechanism. That it is a GPT means that explaining it cannot be relegated to a secondary or tertiary priority; doing so can create serious risks for the public interest. Errors could range from honest mistakes to more sinister questions of incentive: has the AI performed in a certain way because ulterior motives were at play in its design or use? The public interest in greater explainability is intensified by the extreme asymmetry of understanding between those in the know and the public at large. Algorithms can be opaque, and XAI can help users keep up with the scale and real-time decision making of AI. This is an emerging field, and one that is expected to be a key focus in coming years if AI is to achieve mainstream use on a large scale.

Compared with the widespread use of mature everyday technologies, we are still in the early stages of AI adoption. Human systems and structures have an opportunity to use AI in a way that places the public good at the heart of its future development. This requires a mix of technological understanding, strategic decision making, governance mechanisms and agile delivery across multiple domains of subject-matter expertise, all underpinned by the highest standards of ethical behaviour. Explainability will be a central aspect of connecting all these elements.

AI can be polarising: some people have unrealistic expectations that it will be like magic and answer all problems, while others are deeply suspicious of what the algorithms are doing in the background. XAI seeks to bridge this gap by improving understanding, both to manage unrealistic expectations and to give a level of comfort and clarity to the doubters.
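To make the black-box, post-hoc flavour of explainability described above concrete, the following is a minimal Python sketch using scikit-learn's permutation importance: each input feature is shuffled in turn, and the resulting drop in held-out accuracy indicates how strongly the opaque model relies on that feature. The transaction features, thresholds and synthetic data are illustrative assumptions, not drawn from this report.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical transaction features (invented for illustration): amount,
# hour of day, and days since the counterparty was first seen.
amount = rng.lognormal(4, 1, n)
hour_of_day = rng.integers(0, 24, n)
counterparty_age_days = rng.integers(0, 365, n)
X = np.column_stack([amount, hour_of_day, counterparty_age_days])
# Synthetic label: large out-of-hours payments are treated as suspicious.
# counterparty_age_days is deliberately irrelevant, so its importance
# should come out near zero.
y = ((amount > 100) & ((hour_of_day < 7) | (hour_of_day > 19))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the fall in test accuracy:
# a large fall means the black-box model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in zip(["amount", "hour_of_day", "counterparty_age_days"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")

By contrast, an interpretable model in the strict sense, such as a small decision tree or a linear model, can simply be read directly; attribution techniques of this kind are what remain available when the model's inner workings cannot be observed.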
1. Why explainability matters for accountancy and finance professionals

[Figure 1.1: Awareness of XAI (N = 1,063 ACCA and IMA members around the world). 'I'm aware of XAI or explainable AI': 25%; 'I'm aware of the black-box issue with AI algorithms, but haven't heard of XAI or AI explainability': 24%; 'I'm not aware of XAI or AI explainability': 51%.]

A survey of ACCA members conducted in November 2019 revealed that more than half of respondents were not aware of explainability as a focus of attention within the AI industry (Figure 1.1). Increasing awareness can improve the ability of accountancy and finance professionals to ask the right questions about AI products in the market and those in use within their organisations. All the factors involving the public interest and the wider case for explainability apply, but it is worth additionally reflecting on why explainability matters for accountancy and finance professionals in particular.

ADOPTION: ENGAGING WITH AI
Professional accountants frequently refer to the concept of scepticism as a Pole Star to guide their ability to serve their organisations. Scepticism involves the ability to ask the right questions, to interrogate the responses, to delve deeper into particular areas if needed, and to apply judgement in deciding whether one is satisfied with the information as presented. More than twice as many survey respondents agreed as disagreed that explainability is relevant when trying to display scepticism as a professional accountant (Figure 1.2).

[Figure 1.2: 'AI explainability affects the ability of professional accountants to display scepticism' (N = 269: respondents aware of XAI or explainable AI). Responses on a scale from 'Strongly disagree' to 'Strongly agree', plus 'Don't know'; net disagree: 24%, net agree: 56%.]

XAI can provide a record, evidence or illustration of the basis on which the algorithm has operated. For AI to be auditable, it needs to incorporate principles of explainability. This provides an important foundation for adoption, whereas an opaque system in which the technology cannot be interrogated limits the ability to use model outputs; that is no longer a realistic position to take. Moreover, establishing the ROI of adoption will be an important consideration for any organisation, and better explainability drives these returns, because users no longer just wait to see what the model says but have a more precise understanding of how the model can be used to drive specific business outcomes.

IMPACT: USE AT SCALE
The mathematics underlying AI models is theoretically well tested and has been understood for decades, if not longer, and converting it into production-ready models is a core task of data scientists. For accountancy and finance professionals, having an appreciation of the model they are using is essential, but their particular interest is scaling up its use to enterprise level, because this is the point at which the theory becomes reality.
Scaling up presents challenges for deriving value from the model, owing to the volume and variety of additional data, and the noise that comes with it, to which the algorithm is exposed. Greater explainability could help finance professionals understand where a model might struggle when production is scaled up.

A recognised risk with AI algorithms is that of over-fitting. This means that the model works very well with the training data, ie the historical data set chosen to train the algorithm, but then struggles to generalise when applied to wider data sets, which defeats the purpose. It usually happens when the model takes a very literal view of the historical data: instead of using the data as a guide to learn from, it practically memorises the data and all its characteristics verbatim.

Consider a simplified example of a machine learning model for identifying suspicious transactions that need further investigation. During the training phase, the model observed that a high proportion of transactions that turned out to be suspicious occurred outside normal office hours. It therefore attached a high weight to this feature, the timestamp of the transaction, a
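The over-fitting risk described above can be shown in a few lines. Below is a minimal sketch, again with invented data: an unconstrained decision tree effectively memorises its training transactions, scoring near-perfectly on them while doing noticeably worse on unseen ones, whereas a depth-limited tree generalises more evenly.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 1000
hour_of_day = rng.integers(0, 24, n)   # the timestamp feature from the example
amount = rng.lognormal(4, 1, n)
noise = rng.normal(0, 1, n)            # an irrelevant column the model can latch onto
X = np.column_stack([hour_of_day, amount, noise])
# The true signal is only weakly linked to out-of-hours activity, so any
# stronger pattern a model finds in this data is memorised noise.
p_suspicious = 0.1 + 0.3 * ((hour_of_day < 7) | (hour_of_day > 19))
y = (rng.random(n) < p_suspicious).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

deep = DecisionTreeClassifier(random_state=1).fit(X_tr, y_tr)           # memorises
shallow = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X_tr, y_tr)

print("deep tree    train/test accuracy:",
      deep.score(X_tr, y_tr), deep.score(X_te, y_te))
print("shallow tree train/test accuracy:",
      shallow.score(X_tr, y_tr), shallow.score(X_te, y_te))

Comparing the training and test scores in this way is exactly the kind of interrogation that explainability supports: it reveals when an apparently accurate model has simply memorised its history.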
