Experimental Security Research of Tesla Autopilot
Tencent Keen Security Lab
2019-03

Table of Contents
Abstract
Research Target
Background
Autopilot
Vision
Preprocessing
Remote Steering Control
CAN Bus System
APE2LB_CAN
DasSteeringControlMessage
Remotely Control the Steering System
Autowipers
Implementation Details of Autowipers
Digital Adversarial Examples
Adversarial Examples in Physical World
Lane Detection
Implementation Details of Lane Detector
Eliminate Lane Attack
Fake Lane Attack
Conclusion
References
Appendix

Abstract

Keen Security Lab has maintained security research work on Tesla vehicles and shared our research results at Black Hat USA 2017 [1] and 2018 [2] in a row. Based on the ROOT privilege of the APE (Tesla Autopilot ECU, software version 18.6.1), we did some further interesting research work on this module. We analyzed the CAN messaging functions of APE and successfully gained remote control of the steering system in a contactless way. We used an improved optimization algorithm to generate adversarial examples against the features (autowipers and lane recognition) that make decisions purely based on camera data, and successfully carried out the adversarial example attack in the physical world. In addition, we also found a potential high-risk design weakness in the lane recognition when the vehicle is in Autosteer mode.

The whole article is divided into four parts: first a brief introduction to Autopilot; after that we will introduce how to send control commands from APE to control the steering system while the car is driving. In the last two sections, we will introduce the implementation details of the autowipers and lane recognition features, as well as our adversarial example attack methods in the physical world.

In our research, we believe that we made three creative contributions:
1. We proved that we can remotely gain the root privilege of APE and control the steering system.
2. We proved that we can disturb the autowipers function by using adversarial examples in the physical world.
3. We proved that we can mislead the Tesla car into the reverse lane with minor changes on the road.

Research Target

The hardware and software versions of our research target are listed below:

Vehicle            Autopilot Hardware    Software
TESLA MODEL S 75   2.5                   2018.6.1

Background

At Black Hat USA 2018, we demonstrated a remote attack chain to break into the Tesla APE module (ver 17.17.4). Here is a brief summary of our remote attack chain; the attack chain has been fixed after we reported it to Tesla, and more details can be found in our white paper [3].

Fig 1. Remote attack chain from 3G/WiFi to Autopilot ECU

Our series of research has proved that we can remotely obtain the root privilege of APE. We are highly curious about the impact of APE's cybersecurity on vehicles, for example whether hackers can analyze and compromise APE to implement unauthorized high-risk control of vehicles. Through deep research work on APE (ver 18.6.1), we constructed three scenarios to demonstrate our findings. Here we'd like to mention that our security research on APE is based on static reverse engineering and dynamic debugging. However, the autowipers and road lane attack scenarios do NOT need to root the target Tesla vehicle first.
Autopilot

Tesla Autopilot, also known as Enhanced Autopilot after a second hardware version started to be shipped, is an Advanced Driver-Assistance System feature offered by Tesla that provides sophisticated Level 2 autonomous driving. It supports features like lane centering, adaptive cruise control, self-parking, the ability to automatically change lanes with the driver's confirmation, as well as enabling the car to be summoned to and from a garage or parking spot. The Tesla Autopilot system primarily relies on cameras, ultrasonic sensors and radar. In addition, Tesla Autopilot comes loaded with computing hardware from manufacturers like Nvidia, which allows the vehicle to process data using deep learning and react to conditions in real time.

APE, the "Autopilot ECU" module, is the key component of Tesla's auto-driving technology. Though there have been many articles talking about its hardware solution (especially by "verygreen" on TMC [4]), there is much less discussion about its software. As far as we know, currently all APE 2.0 and 2.5 boards are based on Nvidia's PX2 AutoChauffeur [5] (actually a highly customized one [6]). Our test car uses APE 2.5, so our discussion mainly focuses on the APE 2.5 board. Here is a simple graph showing how the internal components are connected. Note that this graph omits all other connections which are not related to our research.

Fig 2. Overview of connections on the APE module

Both APE and APE-B are Tegra chips, the same as on Nvidia's PX2. LB (lizard brain) is an Infineon Aurix chip. Besides, there is a Parker GPU (GP106) from Nvidia connected to APE. The software images running on APE and APE-B are basically the same, while LB has its own firmware. For the APE part, LB is a coprocessor and supports features like monitoring messages on the CAN bus, controlling fan speed, determining whether the APE parts should be turned on, etc. On original PX2 boards the Aurix chips have a console running on the serial port with several useful functions, but on APE 2.5 this chip only provides very few commands on the console.

APE and APE-B are not both used for Autopilot, especially considering that not both chips are connected to all sensors. Information from radars and other sensors is transmitted via several CAN buses (including private ones) and forwarded by LB as UDP messages, which can be received by both processors. However, all cameras, especially main, narrow and fisheye, which are the primary cameras for the Autopilot functions, are only connected to APE via CSI interfaces. Also, the GPU chip is only connected to APE, and we did not see enough evidence showing that the two Tegra chips (as well as the cameras) share the GPU chip. Thus we think APE-B is only something like a "stub function" and APE is the actual chip performing the real work. A later investigation of the firmware shows that APE-B might, sometimes, boot from the same image used for starting up APE. The boot process makes us believe that as long as APE and APE-B run the same firmware, we can easily implement our attacks.

The firmware of APE is a SquashFS image without any encryption. The image runs a highly customized Linux (like "CID" and "IC"). In the firmware, we observed that the binaries of the APE software are under the "/opt/autopilot" folder.

Vision

In this section, we will introduce the implementation details of the Tesla Autopilot module's vision system. The binary "vision" is one of the key components of Autopilot. Autopilot uses it to process the data collected from all cameras.
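Because the firmware is an unencrypted SquashFS image, binaries such as "vision" can be inspected directly once the image is unpacked. The following is a minimal sketch of that step, assuming the image has already been dumped to a file (the name "ape.sqfs" is our own placeholder) and that squashfs-tools is installed on the analysis machine; it is an illustration, not our actual tooling.

    #!/usr/bin/env python3
    # Minimal sketch: unpack an unencrypted SquashFS firmware image and list the
    # Autopilot binaries under /opt/autopilot (e.g. the "vision" binary).
    # "ape.sqfs" is a hypothetical file name for the dumped APE firmware image.
    import pathlib
    import subprocess

    IMAGE = "ape.sqfs"
    OUTDIR = pathlib.Path("ape_rootfs")

    # unsquashfs ships with squashfs-tools; -d selects the extraction directory.
    subprocess.run(["unsquashfs", "-d", str(OUTDIR), IMAGE], check=True)

    # Walk the unpacked root file system and print everything under /opt/autopilot.
    for path in sorted((OUTDIR / "opt" / "autopilot").rglob("*")):
        if path.is_file():
            print(path.relative_to(OUTDIR))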
We did a lot of reverse engineering work on two functions, autowipers and lane recognition, which use a pure computer vision solution. The processing in these functions can be summarized in two parts: their common preprocessing, and their own neural network calculation and postprocessing.

Preprocessing

We think Tesla is using a 12-bit HDR camera, possibly RCCB. The neural network model for vision is not designed to process those images directly, so the program needs to preprocess each image first. As mentioned previously, the communication between different executable files (or services) goes through shared memory, including the original image fetched from the camera. Those images are fetched from certain file handles according to a schedule map.

Fig 3. Buffers are managed by a select() model

Besides, the vision task also takes some control messages from /dev/i2c and other shared memory areas. For diagnostic and product improvement purposes, a copy of the image is also saved into shared memory, so the snapshot task can get and send it. The snapshot task has a large number of record points in different tasks, which makes debugging and feature development work more efficient. The raw data gathered from the snapshot is HDR, 1280x960 and 16-bit little-endian integer; the tone-mapped image is shown below (and may be inaccurate).

Fig 4. Tone-mapped image from the camera

We have previously mentioned the function tesla::TslaOctopusDetector::unit_process_per_camera, which processes each frame from every camera, including the preprocessing procedure. A few prefix and suffix lines are first removed from the image. According to the datasheet [7] provided by ON Semiconductor for the AR0132AT (which might not be Tesla's sensor, but is probably a similar model), those lines might be used only for pixel adjustment and diagnostic purposes, so we assume the Autopilot task is not using those pixels.

The next step is tone mapping, which adjusts the dynamic range of the HDR images from the camera and makes them fit the input model of the neural network. In earlier versions, the image is processed by tmp_cuda_exp_tonemapping; the renamed function is now tesla::t_cuda_std_tmrc::compute, which has lots of improvements. t_cuda_std_tmrc has several outputs, including:

* linear_signal, after HDR conversion and range compression of the raw image;
* detail_layer, the result of boundary detection, which may use a Canny edge detector with some improvements;
* bilateral_output, which could be the result of some bilateral filter, but we failed to get its results.

Moreover, the output also contains some other layers, but since they are not much related to our research, we are not going to mention them here.

The preprocessing for different cameras can be different. Though currently we have only noticed a demosaicing control boolean in the code, we believe it is easy to add different preprocessing filters for different cameras. The output of preprocessed images is then processed through several different modules according to their type and position. Currently, we have observed three different types:

* 0 for "Primary camera", possibly the "main" camera
* 1 for "Secondary camera", possibly the "narrow" camera
* 2 for other cameras

And an enum is used to represent all camera positions:

Fig 5. Enum possibly used to mark different cameras

Fig 6. "main", "narrow" and "fisheye" cameras on the vehicle. (By the way, we noticed a camera called "selfie" here, but this camera does not exist on the Tesla Model S.)
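To make this preprocessing step more concrete, here is a minimal sketch that loads one raw snapshot frame in the format described above (1280x960, 16-bit little-endian) and applies a simple global log tone-mapping curve to compress it into an 8-bit range. The file name "frame.raw" and the curve itself are our own illustrative assumptions; Tesla's t_cuda_std_tmrc runs a far more elaborate pipeline (range compression, detail layer, bilateral filtering) on the GPU.

    # Minimal sketch: tone-map one raw HDR snapshot frame (1280x960, 16-bit
    # little-endian, as described above) into an 8-bit image. "frame.raw" and
    # the log curve are illustrative assumptions, not Tesla's implementation.
    import numpy as np

    WIDTH, HEIGHT = 1280, 960

    # Read the frame as 16-bit little-endian unsigned integers and reshape it.
    raw = np.fromfile("frame.raw", dtype="<u2", count=WIDTH * HEIGHT)
    frame = raw.reshape(HEIGHT, WIDTH).astype(np.float32)

    # Global log tone mapping: normalize, compress the dynamic range, quantize.
    frame /= frame.max() + 1e-6
    tone_mapped = np.log1p(1000.0 * frame) / np.log1p(1000.0)
    image_8bit = (tone_mapped * 255.0).astype(np.uint8)

    print(image_8bit.shape, image_8bit.dtype)  # (960, 1280) uint8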
Generally, those processed images are all written to the input buffers of their corresponding neural networks. Each neural network parses its input images and provides information to tesla::t_inference_engine. Various post-processors receive those results to give control hints to the controller. Those post-processors are responsible for several jobs, including tracking cars, objects and lanes, building maps of the surrounding environment, and determining the rainfall amount. To our surprise, most of those jobs are finished within only one perception neural network.

The complexity of the Autopilot tasks requires different cameras to be assigned to different inference engines, configured with different detectors, and filled with several different configurations. Therefore, Tesla uses a large class for managing those functions (about "large": the struct itself is nearly 900MB in v17.26.76, and over 400MB in v2018.6.1, not including the chunks it allocates on the heap). Parsing out each member is not an easy job, especially for a stripped binary filled with large classes and Boost types. Therefore, in this article we won't introduce a detailed member list of each class, and we also do not promise that our reverse engineering results here represent the original design of Tesla. In the end, the processed images are provided to each network for forward prediction.

Remote Steering Control

In this section, we will introduce how the APE unit works with the EPAS (Electric Power Assisted Steering) unit to achieve steering system control. Moreover, since we've got root access to APE, we will demonstrate how to remotely influence the EPAS unit to control a Tesla car's steering system in different driving modes. APE is the core unit of Tesla's Advanced Driver Assistance System. It's responsible f
