Staff Working Paper No. 816

Machine learning explainability in finance: an application to default risk analysis

Philippe Bracke,(1) Anupam Datta,(2) Carsten Jung(3) and Shayak Sen(4)

August 2019

Staff Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. Any views expressed are solely those of the author(s) and so cannot be taken to represent those of the Bank of England or to state Bank of England policy. This paper should therefore not be reported as representing the views of the Bank of England or members of the Monetary Policy Committee, Financial Policy Committee or Prudential Regulation Committee.

Abstract

We propose a framework for addressing the black box problem present in some Machine Learning (ML) applications. We implement our approach by using the Quantitative Input Influence (QII) method of Datta et al (2016) in a real-world example: an ML model to predict mortgage defaults. This method investigates the inputs and outputs of the model, but not its inner workings. It measures feature influences by intervening on inputs and estimating their Shapley values, representing the features' average marginal contributions over all possible feature combinations. The method identifies key drivers of mortgage defaults, such as the loan-to-value ratio and current interest rate, which are in line with the findings of the economics and finance literature. However, given the non-linearity of the ML model, explanations vary significantly for different groups of loans. We use clustering methods to arrive at groups of explanations for different areas of the input space. Finally, we conduct simulations on data that the model has not been trained or tested on. Our main contribution is to develop a systematic analytical framework that could be used for approaching explainability questions in real-world financial applications. We conclude, though, that notable model uncertainties remain, of which stakeholders ought to be aware.

Key words: Machine learning, explainability, mortgage defaults.

JEL classification: C55, G21.
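For reference, the Shapley value alluded to in the abstract has the standard game-theoretic form; in our notation (added here for exposition, not reproduced from the paper), with N the set of input features and v(S) the expected model output when the features in coalition S are fixed at the instance's values and the rest are drawn from the data distribution, feature i's influence is

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
\frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,
\Bigl( v\bigl(S \cup \{i\}\bigr) - v(S) \Bigr)
```

The weighting averages the marginal contribution of feature i over all orders in which features can be revealed, which is the "average marginal contribution over all possible feature combinations" described above.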
(1) UK Financial Conduct Authority. Email: philippe.bracke@fca.uk
(2) Carnegie Mellon University. Email: danupam@cmu.edu
(3) Bank of England. Email: carsten.jung@bankofengland.co.uk
(4) Carnegie Mellon University. Email: shayaks@london.edu

The views expressed here are not those of the Financial Conduct Authority or the Bank of England. We thank seminar participants at the Bank of England, the MIT Interpretable Machine-Learning Models and Financial Applications workshop, the UCL Data for Policy Conference, Louise Eggett, Tom Mutton and other colleagues at the Bank of England and Financial Conduct Authority for very useful comments. Datta and Sen's work was partially supported by the US National Science Foundation under grant CNS-1704845.

The Bank's working paper series can be found at bankofengland.co.uk/working-paper/staff-working-papers

Bank of England, Threadneedle Street, London, EC2R 8AH. Email: publications@bankofengland.co.uk

© Bank of England 2019. ISSN 1749-9135 (online)

1 Introduction

Machine learning (ML) based predictive techniques are seeing increased adoption in a number of domains, including finance. However, due to their complexity, their predictions are often difficult to explain and validate. This is sometimes referred to as machine learning's black box problem. It is important to note that even if ML models are available for inspection, their size and complexity make it difficult to explain their operation to humans. For example, an ML model used to predict mortgage defaults may consist of hundreds of large decision trees deployed in parallel, making it difficult to summarize how the model works intuitively.
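To make the scale point concrete, consider this toy sketch (synthetic data and hypothetical feature names of our own devising; not the authors' model or dataset). Even a modest forest already accumulates tens of thousands of decision nodes, which is why reading the model directly is not a workable explanation:

```python
# Illustration only: a small random-forest default model already has
# far too many decision nodes to summarise by inspection.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 10_000
# Hypothetical mortgage features (names are ours, not the paper's data).
X = np.column_stack([
    rng.uniform(0.2, 1.1, n),   # loan-to-value ratio
    rng.uniform(0.5, 6.0, n),   # current interest rate (%)
    rng.uniform(1.0, 6.0, n),   # loan-to-income ratio
])
# Synthetic default labels loosely tied to LTV and the interest rate.
p = 1 / (1 + np.exp(-(3 * (X[:, 0] - 0.9) + 0.4 * (X[:, 1] - 3))))
y = rng.random(n) < p

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
total_nodes = sum(est.tree_.node_count for est in model.estimators_)
print(f"{len(model.estimators_)} trees, {total_nodes} decision nodes in total")
```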
Recently a debate has emerged around techniques for making machine learning models more explainable. Explanations can answer different kinds of questions about a model's operation depending on the stakeholder they are addressed to. In the financial context, there are at least six different types of stakeholders: (i) developers, i.e. those developing or implementing an ML application; (ii) 1st line model checkers, i.e. those directly responsible for making sure model development is of sufficient quality; (iii) management responsible for the application; (iv) 2nd line model checkers, i.e. staff that, as part of a firm's control functions, independently check the quality of model development and deployment; (v) conduct regulators that take an interest in deployed models being in line with conduct rules; and (vi) prudential regulators that take an interest in deployed models being in line with prudential requirements.

Table 1 outlines the different types of meaningful explanations one could expect for a machine learning model. A developer may be interested in individual predictions, for instance when they get customer queries but also to better understand outliers. Similarly, conduct regulators may occasionally be interested in individual predictions. For instance, if there were complaints about decisions made, there may be an interest in determining what factors drove that particular decision. Other stakeholders may be less interested in individual predictions. For instance, first line model checkers likely would seek a more general understanding of how the model works and what its key drivers are, across predictions. Similarly, second line model checkers, management and prudential regulators likely will tend to take a higher-level view still.

Table 1: Different types of explanations

| Stakeholder interest | Developer | 1st line model checking | Management | 2nd line model checking | Conduct regulator | Prudential regulator |
|---|---|---|---|---|---|---|
| 1) Which features mattered in individual predictions? | X | | | | X | |
| 2) What drove the actual predictions more generally? | | X | X | X | | X |
| 3) What are the differences between the ML model and a linear one? | X | X | | | | |
| 4) How does the ML model work? | X | X | X | X | X | X |
| 5) How will the model perform under new states of the world (that aren't captured in the training data)? | X | X | X | X | X | X |

Note: in the original table, lighter green shading means these questions are only partially answered through our approach.

Especially in cases where a model is of high importance for the business, these stakeholders will want to make sure the right steps for model quality assurance have been taken and, depending on the application, they may seek assurance on what the key drivers are. While regulators expect good model development and governance practices across the board, the detail and stringency of standards on models vary by application. One area where standards around model due diligence are most thorough is models used to calculate minimum capital requirements. Another example is governance requirements around trading and models for stress testing.¹

In this paper, we use one approach to ML explainability, the Quantitative Input Influence (QII) method of Datta et al (2016), which builds on the game-theoretic concept of Shapley values. The QII method is used in a situation where we observe the inputs of the machine learning model as well as its outputs, but it would be impractical to examine the internal workings of the model itself. By changing the inputs in a predetermined way and observing the corresponding changes in outputs, we can learn about the influence of specific features of the model. By doing so for several inputs and a large sample of instances, we can draw a useful picture of the model's functioning. We also demonstrate that input influences can be effectively summarised by using clustering methods.² Hence our approach provides a useful framework for tackling the five questions outlined in Table 1.

¹ See for instance bankofengland.co.uk/-/media/boe/files/prudential-regulation/supervisory-statement/2018/ss518.
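As a concrete illustration of both steps, here is a minimal sketch under our own assumptions (a toy model on synthetic data; the Shapley estimation uses Monte Carlo sampling of feature orderings with interventions drawn from the data, in the spirit of QII but not the authors' implementation), followed by clustering of the per-instance influence vectors:

```python
# Sketch: estimate Shapley-style input influences for a black-box model by
# intervening on inputs only (no access to model internals), then cluster
# the per-instance influence vectors into groups of explanations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy stand-in for a default model: synthetic features and labels.
X = rng.normal(size=(2000, 3))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=2000) > 0.5
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def predict(A):
    """Black-box interface: only probabilities of the positive class."""
    return model.predict_proba(A)[:, 1]

def influence_vector(x, n_samples=50):
    """Monte Carlo Shapley estimate: reveal features of x in random order,
    drawing the still-hidden features from the empirical distribution."""
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        z = X[rng.integers(len(X))].copy()   # background draw (intervention)
        prev = predict(z[None, :])[0]
        for i in rng.permutation(d):
            z[i] = x[i]                      # reveal feature i
            cur = predict(z[None, :])[0]
            phi[i] += cur - prev             # marginal contribution of i
            prev = cur
    return phi / n_samples

# Influence vectors for a sample of instances, then explanation clusters.
influences = np.array([influence_vector(x) for x in X[:100]])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(influences)
for k in range(3):
    print(f"cluster {k}: mean influence = {influences[labels == k].mean(axis=0)}")
```

Each cluster's mean influence vector then acts as a group-level explanation, indicating which drivers dominate in that region of the input space.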
We use this approach in an applied setting: predicting mortgage defaults. For many consumers, mortgages are the most important source of finance, and the estimation of mortgage default risk has a significant impact on the pricing and availability of mortgages. Recently, technological innovations (one of which is the application of ML techniques to the estimation of mortgage default probabilities) have improved the availability of mortgage credit.³ We hence use mortgage default predictions as our applied use case. But our explainability approach can be equally valuable in many other financial applications of machine learning.

We use data on a snapshot of all mortgages outstanding in the United Kingdom and check their default rates over the subsequent two and a half years. In contrast with some of the most recent economics literature,⁴ we are interested in predicting rather than finding the causes of mortgage defaults. Thus we do not employ techniques or research designs to establish causality claims as understood in applied economics. Such claims would be necessary …
