
Behold, the limitations of computer vision.

Turns Out It’s Really Easy to Trick Tesla’s Autopilot With a Cheap Projector

It’s unsettling to imagine that a $300 trick could fool your considerably more expensive Tesla Autopilot system, and yet, a team of researchers from Ben-Gurion University of the Negev and Georgia Tech has pulled it off.

A cheap projector system displaying false speed limit signs in trees or shining a Slenderman-like figure onto the road can actually force Autopilot to change behavior, adjusting speed to the “road signs” and slowing down for what it thinks might be a pedestrian (never mind the fact that the car still runs over the projection).

These so-called “phantom objects” prove that computer vision still has a long way to go before self-driving cars can truly serve as reliable alternatives to mass transit or personal car ownership. Accordingly, the researchers refer to their efforts as a “perceptual challenge.”

But this experiment isn’t about monkeying around—this is a real security and safety hazard, the researchers point out in a new paper.

“We show how attackers can exploit this perceptual challenge to apply phantom attacks … without the need to physically approach the attack scene, by projecting a phantom via a drone equipped with a portable projector or by presenting a phantom on a hacked digital billboard that faces the Internet and is located near roads,” they write in the abstract.

In Beersheba, Israel—home of Ben-Gurion University of the Negev—Ben Nassi, lead author of the projector paper, used a cheap, battery-operated projector and a drone to cast an image of the frightening figure onto the pavement. He wanted to see if he could create a spoofing scenario that any hacker could easily replicate without having to reveal their identity.

Nassi tested out his theory against Tesla’s Autopilot, as well as Mobileye 630 PRO, another of the most advanced automated driver systems, which is used in cars like the Mazda 3. He projected an image of a vehicle onto the street, which the Model X picked up on; created false speed limit signs, which were detected; and even created fake street lines that forced the Tesla to switch lanes.

These are all examples of “phantom objects,” which Nassi describes as depthless objects that cause automated driving systems to perceive them and consider them real, leading to all sorts of unintended consequences.

Nassi says phantoms aren’t just a concern in the wild through projector methods like his own, but that these false positives could also be embedded into digital billboards, which are often in a car’s frame of vision. In one of the researchers’ example billboard frames, a sneaky phantom lurks in the top left-hand corner that could cause a car to speed up or slow down to around 55 miles per hour. The symbol appears for only 125 milliseconds, but it could cause a massive car accident.
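To get a feel for how brief that exposure is, here is a minimal back-of-the-envelope sketch; the 30 fps refresh rate is an assumption for illustration, not a figure from the paper.

```python
# A minimal sketch (not the researchers' code) of how briefly a phantom needs
# to be shown: at a typical display refresh rate, 125 ms is only a few frames.
BILLBOARD_FPS = 30          # assumed refresh rate of the digital billboard
PHANTOM_DURATION_MS = 125   # exposure time cited by the researchers

frames_needed = round(BILLBOARD_FPS * PHANTOM_DURATION_MS / 1000)
print(f"Phantom must persist for about {frames_needed} frames at {BILLBOARD_FPS} fps")
# -> roughly 4 frames: a flicker most human viewers would never consciously
#    register, yet long enough for a camera sampling the scene to capture it.
```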

This isn’t the first time researchers have made autonomous vehicles look foolish, if not completely blind.

A May 2018 paper from Princeton University and Purdue, for example, showed that bad actors could easily create “toxic signs” that mean something different to computers than to people. The signs’ peculiarities are invisible to human eyes, but can have dire consequences for autonomous vehicles’ vision systems.

“These attacks work by adding carefully-crafted perturbations to benign examples to generate adversarial examples,” the authors write. “In the case of image data, these perturbations are typically imperceptible to humans.”
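To make the idea concrete, here is a hedged sketch of one common way such perturbations are generated, the fast gradient sign method; the paper describes the general approach of perturbing benign examples rather than this exact recipe, and the `model` classifier below is an assumed stand-in for a traffic-sign recognition network.

```python
# A minimal FGSM-style sketch of an adversarial perturbation.
# Assumes a pretrained PyTorch sign classifier `model` and an input image
# tensor normalized to [0, 1]; both are illustrative assumptions.
import torch
import torch.nn.functional as F

def adversarial_sign(model, image, true_label, epsilon=0.03):
    """Add a small perturbation that pushes the classifier away from the
    correct label while staying visually imperceptible to a human."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), torch.tensor([true_label]))
    loss.backward()
    # Step in the direction that increases the loss, bounded by a tiny budget
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```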

In another case, researchers conducted what they call a “disappearance attack” to hide a stop sign from a deep neural network. Simply covering the real stop sign with an adversarial stop sign poster, or adding stickers to it, was enough to confound the neural net.

Nassi and his team refer to this inability of automated vehicles to double-check what they’re seeing as the “validation gap.”

The solution is simple, they posit: Manufacturers of automated driving systems should be working on communication systems that help computer vision systems double-check that what they’re seeing is real. This is a widely accepted viewpoint, they say, but key stakeholders have delayed the production of these tools that could rule out 2D objects like the Slenderman projection.

When these eventually roll out, the systems will allow vehicles to sort of talk to one another to determine if they’re seeing the same thing. In other cases, vision systems installed on buildings or other infrastructure could also communicate with the cars. This is a vision of a connected world that will probably require 5G to work, but is entirely plausible.
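As a rough illustration of what that cross-checking might look like, here is a hedged sketch in which a detection only counts if an independent source, another vehicle or a roadside sensor, reports roughly the same object; the message format and distance threshold are assumptions for illustration, not anything proposed in the paper.

```python
# A hedged sketch of the "double-check" idea: before acting on a detection,
# a vehicle compares notes with nearby vehicles or roadside infrastructure.
from dataclasses import dataclass

@dataclass
class Detection:
    object_type: str      # e.g. "pedestrian" or "speed_limit_55"
    position: tuple       # rough road coordinates (x, y) in metres
    confidence: float

def corroborated(own: Detection, peer_reports: list, max_distance_m: float = 5.0) -> bool:
    """Treat a detection as real only if at least one independent sensor
    reports the same kind of object at roughly the same spot."""
    for peer in peer_reports:
        close = (abs(own.position[0] - peer.position[0]) < max_distance_m and
                 abs(own.position[1] - peer.position[1]) < max_distance_m)
        if peer.object_type == own.object_type and close:
            return True
    return False
```

Under a scheme like this, a projected phantom that only one car’s camera sees would fail the cross-check and be ignored, while a real pedestrian seen by multiple sensors would still trigger braking.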

Until those communication aids hit the mass-market, definitely keep your eyes open and alert while driving your Tesla.

This article was first published on popularmechanics.com
