Hackers trick a Tesla into veering into the wrong lane
Hackers have demonstrated some unsettling ways of tampering with the various systems on a Tesla Model S. Their most dramatic feat: steering the car into the oncoming traffic lane by placing a series of small stickers on the road.
Attack vector: This is an example of an “adversarial attack,” a way of manipulating a machine learning model by feeding it specially crafted input. Adversarial attacks could become more common as machine learning is used more widely, especially in areas such as network security.
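To give a flavor of what “specially crafted input” means, here is a minimal sketch of the fast gradient sign method (FGSM), one of the classic adversarial techniques, written in Python with PyTorch. It is purely illustrative: the `model`, the input tensor, and the `epsilon` step size are all assumptions, and this is not the method the Tencent researchers used against Tesla.

```python
import torch

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge every pixel a tiny step
    in the direction that most increases the model's loss."""
    # Work on a fresh leaf tensor so gradients flow to the pixels.
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # A small, often imperceptible per-pixel step is frequently
    # enough to flip the prediction of a vision model.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Physical attacks like the sticker hack work on the same underlying principle, but instead of per-pixel noise they optimize a printable patch that keeps fooling the model across real-world lighting, distances, and camera angles.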
Blurred lines: Tesla's Autopilot is vulnerable because it recognizes lanes using computer vision. That is, the system relies on camera data, analyzed by a neural network, to tell the vehicle how to stay centered in its lane.
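To make that pipeline concrete, below is a heavily simplified lane-centering sketch using OpenCV. Tesla's actual stack is proprietary and far more sophisticated, so every step here is an assumption chosen for illustration only.

```python
import cv2
import numpy as np

def steering_correction(frame):
    """Toy lane-centering step: find lane-marking pixels in a
    camera frame, estimate the lane center, and return a
    proportional steering offset in [-1, 1]."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Only consider the lower half of the frame, where lane
    # markings on the road surface appear.
    h, w = edges.shape
    roi = edges[h // 2:, :]
    xs = np.where(roi > 0)[1]  # column indices of edge pixels
    if xs.size == 0:
        return 0.0  # no markings detected; hold course
    lane_center = xs.mean()
    # Positive output means the estimated lane center lies right
    # of the image center, so the controller steers right.
    return float((lane_center - w / 2) / (w / 2))
```

Notice that anything producing strong edges on the asphalt, such as a few well-placed stickers, shifts the estimated lane center and therefore the steering output, which is exactly the kind of failure mode a physical adversarial attack exploits.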
Traffic jamming: This is not the first adversarial attack on an autonomous driving system. Dawn Song, a professor at UC Berkeley, has used innocuous-looking stickers to trick a self-driving car into reading a stop sign as a 45-mile-per-hour speed limit sign. Another study, published in March, showed how medical machine learning systems can be tricked into giving the wrong diagnosis.
Bug fixes: The researchers behind the lane-recognition hack, from Keen Security Lab at Tencent, the Chinese tech giant, used a similar attack to disrupt the vehicle's automatic windshield wipers. They also hijacked the car's steering wheel using another method. A Tesla spokesperson told Forbes that the latter vulnerability has been fixed in its latest software update. The spokesperson also said the adversarial attack was unrealistic “given that a driver can easily override Autopilot at any time.”
Image credit:
- The Tesla Model S used by Tencent Keen Security Lab researchers.