Artificial Intelligence for Children
Toolkit
March 2022

© 2022 World Economic Forum. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, including photocopying and recording, or by any information storage and retrieval system.

Disclaimer: This document is published by the World Economic Forum as a contribution to a project, insight area or interaction. The findings, interpretations and conclusions expressed herein are a result of a collaborative process facilitated and endorsed by the World Economic Forum but whose results do not necessarily represent the views of the World Economic Forum, nor the entirety of its Members, Partners or other stakeholders.

Images: Getty Images

Contents

Introduction
1. C-suite and corporate decision-makers checklist
   - Where companies can fall short
   - Actions
   - The rewards of leading
2. Product team guidelines
   - Foundational principles
   - The challenge
   - Definition of children and youth
   - Social networks
   - Overarching limitations
   - Putting children and youth FIRST: Fair, Inclusive, Responsible, Safe, Transparent
3. AI labelling system
4. Guide for parents and guardians
   - Benefits and risks
Contributors
Endnotes

Introduction

This toolkit is designed to help companies develop trustworthy artificial intelligence for children and youth.

For the first time in history, a generation of children is growing up in a world shaped by artificial intelligence (AI). AI is a set of powerful algorithms designed to collect and interpret data to make predictions based on patterns found in the data. Children and youth are surrounded by AI in many of the products they use in their daily lives, from social media to education technology, video games, smart toys and speakers.
AI determines the videos children watch online, their curriculum as they learn, and the way they play and interact with others.

This toolkit, produced by a diverse team of youth, technologists, academics and business leaders, is designed to help companies develop trustworthy AI for children and youth, and to help parents, guardians, children and youth responsibly buy and safely use AI products.

AI can be used to educate and empower children and youth and to have a positive impact on society. But children and youth can be especially vulnerable to the potential risks posed by AI, including bias, cybersecurity threats and lack of accessibility. AI must be designed inclusively to respect the rights of the child user. Child-centric design can protect children and youth from the potential risks posed by the technology.

AI technology must be created so that it is both innovative and responsible. Responsible AI is safe, ethical, transparent, fair, accessible and inclusive. Designing responsible and trusted AI is good for consumers, businesses and society. Parents, guardians and adults all have the responsibility to carefully select ethically designed AI products and to help children use them safely.

What is at stake? AI will determine the future of play, childhood, education and societies. Children and youth represent the future, so everything must be done to support them in using AI responsibly and in addressing the challenges of the future.

This toolkit aims to help responsibly design, consume and use AI. It is designed to help companies, designers, parents, guardians, children and youth make sure that AI respects the rights of children and has a positive impact on their lives.
Who are you?

Are you a corporate decision-maker, a member of a product team, or a parent or guardian?

Corporate users

The checklist for C-suite executives and the guidelines for product teams contain actionable frameworks and real-world guidance to help your company design innovative and responsible AI for children and youth. By using these guidelines, you can lead as a trusted company that delights your child users.

Companies should keep in mind that children often use AI products that were not designed specifically for them. It is sometimes difficult to predict which products might later be used by children or youth. As a result, you should carefully consider whether children or youth might be users of the technology you are developing. If they are, you should carefully consider how to increase the benefits and mitigate the potential risks the technology poses for children and youth.

The C-suite is responsible for setting the culture around responsible AI and the strategy for, and investment in, AI products. The checklist is designed to help executives learn more about the benefits and risks of AI for children and youth so you can better lead, innovate and grow. Read more about the C-suite checklist.

Product teams design, develop and deploy the AI technology that children and youth will use. Responsible design starts with product teams and continues to be their ongoing responsibility. The guidelines are designed for engineers, developers, product managers and other members of the product team to use throughout the product life cycle.

Putting children and youth FIRST checklist:
- Fair
- Inclusive
- Responsible
- Safe
- Transparent

Consumers: Parents and guardians

Parents and guardians decide which AI-powered technologies to buy for their children.
By educating yourselves and better understanding the benefits and risks posed by the technology, you can make deliberate and informed decisions that protect your children and ensure AI has a positive impact on their lives. Learn more about the Guide for parents and guardians.

The guide for parents and guardians is built around the AI labelling system (Figure 1) and helps readers understand its six important categories of AI.

AI labelling system

The AI labelling system is designed to be included with all AI products, on their physical packaging and accessible online through a QR code. Like nutritional information on food packaging, the labelling system is designed to concisely tell consumers, including parents and guardians as well as children and youth, how the product works and the options available to its users. All companies are encouraged to adopt this tool to help create greater trust and transparency with the purchasers and child users of their products. Learn about the AI labelling system.

FIGURE 1: AI labelling system
- Age
- Data use
- AI use
- Networks
- Sensors
- Accessibility
Source: World Economic Forum

1. C-suite and corporate decision-makers checklist

Actionable frameworks and real-world guidance help companies design innovative and responsible AI for children and youth.

This checklist is for C-suite executives of companies that provide products and services incorporating artificial intelligence (AI) intended for use by children and youth. Many companies use AI to differentiate their brands and their products by incorporating it into toys, interactive games, extended reality applications, social media, streaming platforms and educational products. With little more than a patchwork of regulations to guide them, organizations must navigate a sea of privacy and ethics concerns related to data capture and the training and use of AI models.
Executive leaders must strike a balance between realizing the potential of AI and helping reduce the risk of harm to children and youth and, ultimately, to their brand. Building on the foundation established in the World Economic Forum's "Empowering AI Leadership: AI C-Suite Toolkit", the checklist is intended to help C-suite executives and other corporate decision-makers reflect upon and act to create and support responsible AI for this vulnerable population.

Trusted and responsible AI for children and youth: A checklist for executives

Attracted by the extraordinary opportunity to innovate with AI, companies are moving at a record pace to incorporate AI into toys, broadcast and social media, smart speakers, education technology, virtual worlds and more. AI ranges in complexity and impact from simple recommendation and customization engines to deeply immersive experiences that imitate and simulate human behaviour, emotions and interactions.

Implemented thoughtfully, these systems can delight, teach and evoke interaction with their young users, enabling them to grow and develop at their own pace and according to their learning styles. But implemented without careful forethought and the guidance of child development experts and ethicists, AI can hinder development and infringe on the rights of vulnerable users. With this checklist, leaders can learn how even well-meaning companies overlook potential issues, and how to mitigate the risks associated with AI adoption.

Executives should aspire to the highest possible ethical and social standards regarding child development, suitability for purpose, non-bias, accessibility and privacy. Doing so provides tremendous potential beyond the opportunity to do good. It can elevate your brand and enable you to position your company as a trustworthy steward of sought-after products and services to your primary buyers: parents, grandparents, teachers, educators and other care providers.
Where companies can fall short

Given the acceleration of AI adoption and a lag in broadly accepted standards and guidance, leaders might be caught off guard. What are the riskiest behaviours that your teams should avoid?

- Not disclosing how AI is used: Companies that think buyers may object to AI may conceal or downplay its use. Be transparent about the use of AI and why you are using it.
- Amplifying and perpetuating bias: AI modelling can contain inaccuracies and oversimplifications that lead to inaccessibility and bias against marginalized groups, such as disabled communities and users from different cultural and socio-economic backgrounds.
- Skipping user-focused validation: Bypassing user and expert validation of suitability for purpose during the design and prototyping stages can diminish the potential value of AI and cause harm.
- Leaving privacy and security gaps: Data security, privacy and consent to collect and use data are complicated by cybersecurity threats and a patchwork of regulations that vary geographically. These concerns reach past the useful life of a product: for minors, parents provide consent, but their children may claim their right for their data to be forgotten as they get older.

With these potential stumbling blocks in mind, what steps can corporate leaders take to protect and enhance their brand while leveraging the remarkable potential of AI?

Actions

Executive leaders should create a culture of responsibility backed by resources that enable responsible AI from design to end-of-product use and beyond. These steps are recommended:
1. Know the legal duties and regulatory constraints: Leverage existing guidance, such as the Institute of Electrical and Electronics Engineers (IEEE) Code of Ethics,1 UNICEF's Policy Guidance on AI for Children2 and World Economic Forum guidance,3 as well as the guidance contained in this toolkit: the guidelines for the product team, the AI labelling system, and the resources for parents and guardians and for children and youth. Commit to internal and, if possible, external AI oversight. Report compliance and leadership measures publicly and in simple language so buyers can understand.

2. Build a diverse and capable team: Include ethicists, researchers, privacy specialists, educators, child development experts, psychologists, user-experience (UX) designers and data scientists. Collaborate with non-profit organizations and educational and research institutions for more expertise.

3. Train your team and provide resources for success with this checklist: Educate team members about the importance of responsible and trustworthy AI, and provide them access to the skills, tools and time they need to execute your vision. Have open dialogue about unintended consequences, possible worst-case scenarios, and the reasons for ensuring your teams consider the five AI characteristics critical to putting children and youth FIRST (Figure 2). For more information, refer to the product team guidelines, which offer detailed guidance on the five areas.

4. Offer expertise to inform the development of regulations, standards and guidance: Contribute to public forums on how AI is being used in your products or services. Share your experience by proposing guidance and requirements.

5. Welcome principled efforts to label products and services: These should be done according to the potential impact of AI on users. Endorse and participate in activities to develop labelling and rating standards.
Label your offerings to help consumers make informed choices based on recommendations about, for example, user age, accessibility factors and whether a camera and microphone are being used. For additional information about labelling recommendations, see the AI labelling system.

FIGURE 2: Putting children and youth FIRST checklist
- Fair: Company culture and processes address ethics and bias concerns regarding how AI models are developed by people and the impact of AI models in use.
- Inclusive: AI models interact equitably with users from different cultures and with different abilities; product testing includes diverse users.
- Responsible: Offerings reflect the latest learning science to enable healthy cognitive, social, emotional and/or physical development.
- Safe: The technology protects and secures user and purchaser data, and the company discloses how it collects and uses data and protects data privacy; users may opt out at any time and have their data removed or erased.
- Transparent: The company explains in non-technical terms to buyers and users why AI is used, how it works and how its decisions can be explained. The company also admits AI's limitations and potential risks and welcomes oversight and audits.
Source: World Economic Forum

The rewards of leading

When you deliver responsible AI-based offerings and engage in the development of standards, you can do much more than just affect your bottom line. You help young users grow into the best versions of themselves: a generation empowered by AI.