… this exclusion itself can stem from discrimination. Technology can reduce the costs associated with extending credit and make it profitable to extend credit to households outside the mainstream majority community. Technology can also increase access to information about the availability and pricing of financial products and services, and thereby level the informational playing field.

However, a positive outcome for the role of technology in mitigating discrimination is not preordained. Technology inherently has no animus, but it is not immune from being discriminatory. Some households have more access to, and more facility with, technology than others, such that even innovations may reinforce patterns of exclusion. Technology can allow firms to target advertising and product offers very precisely to consumers, raising the possibility that households have different information sets and even face different prices, sometimes in breach of fair lending, equality, public accommodation, and civil rights laws. Decision-making by financial services providers via algorithms incorporates thousands of variables and presents courts, policymakers, and regulators with complex questions about how to think about and detect discrimination. And although algorithms have no inherent bias, they can incorporate the biases embedded in the broader culture through the datasets used in their development and through the biases of their development teams.

This chapter brings together these benefits and friction points of how technology in finance can affect discrimination. The evidence indicates that technology is a powerful force for reducing discrimination stemming from human discretion (“taste-based” discrimination in economics parlance). But the net effect of technology as an abater of discrimination, especially looking into the future, is not obvious, and depends heavily on how legal and regulatory uncertainties surrounding the use of algorithms in what economists call “statistical” discrimination are resolved.

If the question of whether technology is net positive or negative for discrimination is the thread woven through our chapter, the overall tapestry of our contribution lies in identifying the technological implementations that can lead to discrimination, focusing in particular on the interactions between financial service providers and households in human discretion, algorithmic decision-making, and innovation and inclusion. We identify five such gateways for discrimination: (i) human involvement in designing and coding algorithms, (ii) biases embedded in training datasets, (iii) practices of scoring customers for creditworthiness based on variables that proxy for membership in a protected class, especially through digital footprint and mobile data, (iv) practices of statistical discrimination for profiling shopping behavior, and (v) practices of technology-facilitated advertising, including ad targeting and ad delivery.

Within these implementations of technology, we further identify four regulatory “frontlines.” We use the term frontline to connote two sentiments: a situation of uncertainty (in particular, as to whether the regulatory status quo will remain) and a setting of potential conflict (as legal protections of individuals confront forces of business use of technology). How these legal and regulatory frontlines are resolved will affect whether technology is, on net, positive or negative for discrimination in the long run.
Our regulation frontlines focus on regulatory uncertainty concerning: (i) whether a variable is “correlated enough” with a protected class to be discriminatory itself, (ii) the use of input-based enforcement of large-dataset algorithms versus output-based compliance, (iii) the extent to which privacy laws restrict algorithmic provision of financial services, and (iv) the applicability of public accommodation laws (also called equality laws) to disparities in access to online and mobile provision of financial services. These points of uncertainty affect not just legal tensions over how regulators and courts will act, but also the ability of financial service providers to innovate and the incidence of the benefits of innovation to consumers.

Our chapter builds heavily on the work of other scholars who examine various settings or specific aspects of discrimination. We highlight these works as we proceed. Our contribution is in the amalgamation and analysis of ideas toward understanding the gateways through which discrimination enters technological finance, the frontlines of regulation, and the weights for and against technology as an abater of discrimination in financial services. In the process, we gain insight into just how dramatically technology has changed the way discrimination manifests itself in financial services.

II. Views of Discrimination: Lawyers and Economists

The United States has comprehensive federal laws prohibiting discrimination in lending, and a patchwork of state and federal laws that cover, less comprehensively, other financial services. 2 Evans (2017) provides a review of these laws with an emphasis on their implications for fintech firms. In the United Kingdom (UK) and Canada, discrimination in financial services is prohibited under broader anti-discrimination laws. 3 The European Union (EU) recognizes non-discrimination as a fundamental right, but relegates specific legislation to member states, who in turn vary in their attention to discrimination legislation and enforcement. In practice, financial service providers have had more freedom in continental Europe to use protected characteristics for profiling, but this is changing as the EU and UK take a leadership role in regulating the use of technology in finance.

Discrimination laws cover a varying set of protected classes. Individuals are usually safeguarded against discrimination based on race, ethnicity, religion, marital or family status, and disability, and sometimes on additional characteristics such as age, gender, national origin, sexual orientation, gender reassignment, political views, genetic or biometric information, veteran status, and use of social safety nets.

Discrimination laws in principle cover all the steps and practices involved in offering a financial service. However, the enforcement of this principle varies from country to country on at least two dimensions. First, countries may differ in the intensity of their focus on the steps. U.S. antidiscrimination laws and their implementing regulations generally have greater specificity as to the steps and practices covered; in fair lending, for instance, a lender in the U.S. cannot discriminate in advertising, credit risk assessment, or pricing of a loan. The UK’s broad-based laws have some individual requirements and carve-outs but in general have less specificity than those of the U.S. 4

2 The main fair lending laws in the United States are the Equal Credit Opportunity Act and the Fair Housing Act.
3 The Equality Act 2010 prohibits discrimination in the provision of services in the UK. See Hale (2018) for a discussion of equality under the law in the UK. The Canadian Human Rights Act prohibits discrimination at the federal level in Canada. Discrimination may also be regulated at the provincial level for financial service providers that operate in only one Canadian province or territory.

Second, countries may differ in the specificity of the sectors to which antidiscrimination laws apply. Again, the U.S. code is more specifically written, delineating housing, credit, and employment as sectors with particularly detailed regulations. An advantage of specificity in prevention and enforcement is the attention to the particulars of the steps and sectors listed in the laws. An advantage of generality is the flexibility to consider new steps and sectors as the provision of financial services expands and changes with technology. Of course, the question in the more general case (a question being played out across the different country jurisdictions in the EU) is whether a country will delve into discrimination compliance within the steps of provision if the law does not explicitly say so.

Finally, all discrimination laws speak to direct discrimination: treating individuals differently on the basis of protected characteristics such as race, ethnicity, or gender. This is called disparate treatment in the U.S. U.S. regulators distinguish between “overt evidence” of disparate treatment, when a lender openly discriminates on a prohibited basis, and “comparative evidence” of disparate treatment, when a lender treats an applicant differently based on a prohibited basis. 5 Comparative evidence can encompass treating individuals differently on the basis of variables that are highly correlated with a prohibited basis. In the U.S., variables such as grey hair (for age in employment decisions) or targeted zones of zip codes (for minority neighborhoods in credit decisions) fall under this category and are generally considered direct discrimination, since these variables are masked versions of the original prohibited basis. 6

Some discrimination laws also encompass indirect discrimination, which occurs when a policy or practice that seems neutral on its face disadvantages a protected group indirectly through other variables. The Australian Human Rights Commission provides the example of a public building that is accessible only by stairs as representing indirect discrimination against people with disabilities. This example is indirect because the lack of accessibility of the building is presumably not intended to preclude people with disabilities but rather to save on costs. 7 In the U.S., the term disparate impact maps roughly to indirect discrimination.

4 This may be changing, as a new regulation took effect in June 2019 that forbids any advertising that includes gender stereotypes that are likely to cause harm (Safronova, 2019).
5 FDIC Consumer Compliance Examination Manual, Fair Lending Laws and Regulations, IV 1.1-1.2, September 2015, fdic.gov/regulations/compliance/manual/4/iv-1.1.pdf.
6 Rothstein (2017) discusses the historical roots of redlining. Pop (2013) describes a similar debate that unfolded in Germany.
7 humanrights.gov.au/quick-guide/12049
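To make the proxy and disparate impact notions concrete, the stylized sketch below simulates a facially neutral input whose distribution differs across a protected class. Everything in it (the variable names, the data-generating process, the approval cutoff) is our invented illustration, not a legal test; the only outside reference is the 0.8 “four-fifths” screening heuristic from U.S. employment law, which has no direct analogue in fair lending.

    # Stylized simulation: a "neutral" input that proxies for a protected
    # class, and the disparate impact of a cutoff rule applied to it.
    # All names and parameters here are illustrative assumptions.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    n = 100_000
    protected = rng.integers(0, 2, n)  # 1 = protected-class member

    # Facially neutral variable (think: a neighborhood-level score) whose
    # mean differs by protected-class membership.
    neutral = rng.normal(loc=np.where(protected == 1, -0.4, 0.0), scale=1.0)

    # (i) How well does the "neutral" variable recover protected status?
    # AUC of 0.5 = uninformative; 1.0 = a perfect proxy.
    print("proxy AUC:", round(roc_auc_score(protected, -neutral), 2))

    # (ii) Disparate impact of a facially neutral approval cutoff.
    approve = neutral > 0.0
    r1 = approve[protected == 1].mean()  # approval rate, protected group
    r0 = approve[protected == 0].mean()  # approval rate, everyone else
    print(f"approval rates: {r1:.0%} vs {r0:.0%}; ratio = {r1 / r0:.2f}")
    # In U.S. employment law, a ratio below 0.8 (the EEOC "four-fifths"
    # heuristic) draws scrutiny; fair lending has no such bright line.

The point of the sketch is the frontline itself: how informative a facially neutral variable must be about a protected class before its use amounts to discrimination remains an open regulatory question.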
In contrast to legal views of discrimination, economists view discrimination through the lens of whether it is “taste-based” or “statistical.” Under taste-based discrimination (Becker, 1957), decision-makers get utility from engaging in prejudice, and are willing to sacrifice other priorities, such as hiring the most productive workers possible, in order to satisfy their biases. The much-cited culmination of Becker’s theory is that taste-based discrimination cannot persist because other employers, who do not have a taste for prejudice, will hire workers based solely on their productivity. These non-discriminating firms will be more profitable than their prejudiced competitors, and the prejudiced firms will go out of business. As we discuss in section III, this culmination may not play out in practice.

Under statistical discrimination (Arrow, 1973; Phelps, 1972), discrimination results from the practice of using observable variables as statistical discriminants to uncover unobserved variables. There are two crucial differences between statistical and taste-based discrimination for our purposes. First, statistical discrimination does not require employers or other decision-makers to have animus or a negative taste toward a protected category (non-whites in the Beckerian formulation). Rather, decision-makers engage in statistical discrimination because they are missing information on a characteristic that is key to their decision, such as credit risk in the case of lending. In the formative theory models, a lender that lacks such information may proxy for credit risk using the average credit risk of a group, where the group is defined by gender, race, ethnicity, or another characteristic. In practice, applying averages by protected group is illegal, but lenders may use other variables that correlate with a protected category to recover credit risk, and thereby implement statistical discrimination in a more general way than in the original theories.

Second, statistical discrimination, unlike taste-based discrimination, is profit-maximizing for financial service providers. This implies that the target of using statistical discriminants, from the firm’s (and the economist’s) perspective, is profits, not merely uncovering the unobserved component of credit risk. We discuss in section IV.a how this economists’ concept of statistical discrimination sometimes misaligns with the legal view. As a preview here, we note that the possibility that financial service providers could be illegally discriminating while profit maximizing is an uncomfortable juxtaposition of the economists’ view of discrimination with the law.
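To fix ideas, the two economic notions can be summarized in compact textbook form. The notation below is ours (a minimal sketch, not the formulation used elsewhere in this chapter), abstracting from the richness of the original models.

In Becker’s (1957) model, an employer with discrimination coefficient $d > 0$ against a group behaves as if a worker from that group costs $w(1 + d)$ rather than the market wage $w$, hiring such a worker only if

$$MP \geq w(1 + d),$$

where $MP$ is the worker’s marginal product. An unprejudiced competitor hires whenever $MP \geq w$, which is the source of the profit advantage behind Becker’s competition argument.

In the Phelps (1972) model, a decision-maker observes only a noisy signal $s = q + \varepsilon$ of an individual’s true quality or creditworthiness $q$, with $q \sim N(\mu_g, \sigma_q^2)$ for group $g$ and $\varepsilon \sim N(0, \sigma_\varepsilon^2)$. The profit-maximizing inference is

$$\mathbb{E}[q \mid s, g] = (1 - \gamma)\,\mu_g + \gamma\, s, \qquad \gamma = \frac{\sigma_q^2}{\sigma_q^2 + \sigma_\varepsilon^2},$$

so the group mean $\mu_g$ enters every individual assessment whenever the signal is noisy ($\gamma < 1$), with no animus required.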
III. Human Decisions and Discretion

III.a. Discriminatory Discretion Ameliorated by Technology

Historically, lenders have exhibited patterns in providing financial services that appear consistent with taste-based discrimination against certain types of individuals, even when acting on these biases has resulted in lower profits (Charles and Hurst, 2002; Bayer, Ferreira, and Ross, 2017; Alesina, Lotti, and Mistrulli, 2013; Deku, Kara, and Molyneux, 2016; Dobbie, Liberman, Paravisini, and Pathania, 2018; and Bartlett, Morse, Stanton, and Wallace, 2019). In Becker’s theory, taste-based discrimination is competed away by market forces. However, if the market is not fully competitive, or if the foregone profits associated with employees who discriminate are fairly small, taste-based discrimination can persist.

This type of discrimination is particularly likely to emerge in settings where decision-makers have discretion. Technology has the potential to limit discretionary discrimination by providing information about financial services more broadly and at lower cost, and by limiting the face-to-face interactions that appear to facilitate discrimination. Scott Morton, Zettelmeyer, and Silva-Risso (2003), for example, found that Black and Latinx car purchasers paid more than white purchasers when the sales negotiation took place in person, but not when it occurred on the Internet.

When humans are removed fully from negotiations, the decision-making becomes algorithmic, which has been found to reduce costly discriminatory discretion in many settings. For instance, Kleinberg, Lakkaraju, Leskovec, Ludwig, and Mullainathan (2018) show that a machine learning algorithm outperforms human judges in predicting which defendants will skip their next court appearance or commit crimes while out on bail, and does so without increasing racial disparities in the probability of being released on bail. In the realm of lending, Dobbie, Liberman, Paravisini, and Pathania (2018) show that a high-cost lender in the UK would