Keep Alexa away from all windows: Turns out hackers can shine lasers at your Google Assistant or Amazon Alexa-enabled devices and gain control of them, sending commands to the smart assistants or obtaining your valuable account information.
Researchers proved this by using lasers to inject malicious commands into voice-controlled devices like smart speakers, tablets, and phones across long distances through glass windowpanes.
In a new paper, “Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems,” the authors describe the laser-based vulnerability as a “signal injection attack” on microphones, exploiting the photoacoustic effect, by which light is converted into sound at the microphone.
User authentication is lacking or nonexistent on voice assistant devices, the researchers also found. That meant they could use the light-injected voice commands to unlock the victim’s smartlock-enabled doors, shop on their e-commerce sites, use their payment methods, or even unlock and start vehicles connected to the victim’s Google account.
The research—funded by the Japan Society for the Promotion of Science and the U.S. Defense Advanced Research Projects Agency (DARPA), among other organizations—was meant to uncover possible security vulnerabilities in Internet of Things devices.
“While much attention is being given to improving the capabilities of [voice controlled] systems, much less is known about the resilience of these systems to software and hardware attacks,” the authors write in the paper. “Indeed, previous works already highlight a major limitation of voice-only user interaction: the lack of proper user authentication.”
So how does it work? Microphones convert sound into electrical signals, sure, but they also respond to light aimed directly at them. By modulating the intensity of a high-power light beam with an audio signal, the researchers could trick the microphones embedded in voice-controlled devices into producing the same electrical signals they would generate if they were picking up real speech. That means hackers can inject commands that are completely inaudible, carried instead on a beam of light.
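The modulation step can be sketched in a few lines of Python: an audio waveform amplitude-modulates the laser diode's drive current, so the light intensity, and hence the signal a light-sensitive microphone picks up, tracks the audio. The bias and modulation-depth values below are illustrative assumptions for the sketch, not the researchers' actual hardware parameters.

```python
import numpy as np

SAMPLE_RATE = 44_100   # audio sample rate in Hz (illustrative)
BIAS_CURRENT = 200.0   # laser diode DC bias in mA (illustrative)
MOD_DEPTH = 150.0      # peak current swing in mA (illustrative)

def audio_to_drive_current(audio):
    """Amplitude-modulate a laser diode drive current with an audio signal.

    `audio` is a float array in [-1, 1]. The result is the instantaneous
    diode current: a DC bias (keeping the laser lasing) plus a swing
    proportional to the audio waveform. A microphone illuminated by the
    resulting beam produces an electrical signal that tracks `audio`.
    """
    audio = np.clip(audio, -1.0, 1.0)
    return BIAS_CURRENT + MOD_DEPTH * audio

# Example: a 1 kHz test tone standing in for a spoken command.
t = np.arange(0, 0.01, 1 / SAMPLE_RATE)
tone = np.sin(2 * np.pi * 1000 * t)
current = audio_to_drive_current(tone)
```

The DC bias matters: the current must never drop below the laser's lasing threshold, so the audio rides on top of a constant offset rather than switching the beam on and off.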
In the researchers' tests, devices were vulnerable from as far as 110 meters away, about the length of a football field.
Pretty much any voice-enabled device you can imagine is vulnerable to this kind of attack, but the authors have tested and confirmed vulnerabilities in the following:
Perhaps the most disturbing part is how easy this kind of hack is. All you need, according to the research group, are a simple laser pointer, a laser diode driver (which supplies a steady, controllable current to the laser), a sound amplifier, and a telephoto lens to focus the beam for long-range attacks.
While these hacks were carried out by professional researchers, criminals could do the same (though the team found no evidence of this happening maliciously ... yet). With the methodology now public, it's a double-edged sword: yes, some consumers may read about this vulnerability and prepare against it, but hackers now have new ideas, too.
It’s pretty difficult to tell if you’re being attacked this way, but you might catch the light beam’s reflection on the device in question, or you can watch for unexpected verbal responses and changes in the device’s light pattern. If you notice your device acting erratically, unplug the damn thing and move it away from the window.
The good news is that there are ways to protect yourself from a laser-based attack, according to the paper: