
Voice Assistants Re-Heating Old Security Issues

Written by Reflare Research Team | May 11, 2018 1:15:00 PM

Security concerns for voice assistants have long existed (when was the last time you watched an AI movie and thought “oh, I’ve got nothing to worry about here”?). However, as voice assistants become more prevalent, we should be ready for the scale of the security challenges that come with them.


"Hey Siri, what's zero divided by zero?"


As AI assistants become more and more commonplace, they are also entrusted with increasing responsibilities. While controlling the lights or playing music has very few security implications, more recent features such as controlling door locks, making calls, ordering items or sending messages can have far-reaching consequences. An AI assistant with the ability to place autonomous calls - as shown in Google’s recent demonstration of the Google Assistant booking a haircut appointment - could just as easily be abused to empty a person’s bank account. After all, many banks will accept a call from the account holder's registered phone number as proof of identity.

None of this is new. Attacks impersonating individuals have sprung up with every new medium of communication. From fake social media accounts to forged email addresses, impersonating phone calls, falsified telegrams and physical mail fraud - the problem is as old as indirect human communication. Unlike with those older media, however, we are not yet used to treating AI assistants in a security context.

The problem with AI assistants specifically

AI assistants are a somewhat recent innovation. Only 10 years ago, the accuracy of voice recognition was so poor that practical applications were limited to the most basic of commands.

An increase in processing power, the evolution of hardware, advances in AI research, and the establishment of mobile data networks (which allow much of the heavy processing required for voice recognition to be offloaded to the cloud) have brought voice recognition and AI assistant technology from the realm of science fiction into everyday use.

Unfortunately, while AIs have become capable of understanding human voices, they are not yet capable of accurately identifying humans by their voices. Exploitation of this weakness has been benign so far - from Burger King’s controversial ad prompting Google Home devices to read out burger ingredients, to an episode of the animated show South Park tricking Alexa devices into shouting obscene phrases. Genuinely malicious abuse tactics are mostly still in the research stage.

Instead of sending unexpected but perceivable commands via TV, radio or streaming providers, researchers have found ways to hide voice commands in white noise or high-frequency bands. Most of these attacks exploit the relative weakness of human hearing when compared to microphones. While human hearing cuts off around 20 kHz, most smartphone and smart speaker microphones can pick up sounds well beyond this limit. So a voice command played back at the very high frequency of 23 kHz will be inaudible to almost all humans but clearly received, interpreted and executed by AI assistants.
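To make that gap concrete, the short Python sketch below - an illustration only, not a reproduction of any specific attack; the 48 kHz sample rate, 23 kHz carrier and variable names are our own assumptions - builds a signal whose energy sits entirely above the limit of human hearing yet comfortably within what everyday audio hardware can capture.

```python
# Illustrative sketch only: it shows the frequency gap described above, not an
# actual attack payload. Sample rate, carrier frequency and names are assumptions.
import numpy as np

SAMPLE_RATE = 48_000  # Hz; a common capture rate for phone and smart-speaker mics
CARRIER_HZ = 23_000   # above the ~20 kHz limit of human hearing
DURATION_S = 1.0

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE

# Stand-in for a spoken command: a slow envelope modulated onto the carrier.
envelope = 0.5 * (1 + np.sin(2 * np.pi * 3 * t))
hidden_signal = envelope * np.sin(2 * np.pi * CARRIER_HZ * t)

# Where does the energy end up? Practically none of it is in the audible band,
# yet a 48 kHz capture can still represent it (Nyquist limit: 24 kHz).
spectrum = np.abs(np.fft.rfft(hidden_signal)) ** 2
freqs = np.fft.rfftfreq(len(hidden_signal), d=1 / SAMPLE_RATE)
audible_share = spectrum[freqs < 20_000].sum() / spectrum.sum()
print(f"Share of signal energy below 20 kHz: {audible_share:.2%}")
```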

Of course, there are countermeasures for all of these attacks. Frequencies can be limited, devices can require additional authentication, and voice identification can be improved. However, as these technologies are either non-existent or still under heavy development, it is advisable to treat AI assistants as potential rogue agents for the foreseeable future.
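As a concrete illustration of the first of those countermeasures, the sketch below shows what frequency limiting might look like in practice - our own minimal Python example, not code from any actual assistant. It low-passes incoming audio at 18 kHz so that ultrasonic content is discarded before it ever reaches the recognizer; since speech sits far below that cutoff, the recognizer loses nothing of value.

```python
# Hedged sketch of the "limit frequencies" countermeasure, not a vendor's
# implementation. The 18 kHz cutoff, filter order and names are our own choices.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def strip_ultrasonic(audio: np.ndarray, sample_rate: int, cutoff_hz: float = 18_000) -> np.ndarray:
    """Remove frequency content above cutoff_hz from a mono audio buffer."""
    # 8th-order Butterworth low-pass; filtering forwards and backwards avoids phase distortion.
    sos = butter(8, cutoff_hz, btype="lowpass", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, audio)

if __name__ == "__main__":
    sample_rate = 48_000
    t = np.arange(sample_rate) / sample_rate
    # A 23 kHz tone standing in for a hidden command: present in the capture, inaudible to us.
    hidden = np.sin(2 * np.pi * 23_000 * t)
    cleaned = strip_ultrasonic(hidden, sample_rate)
    print(f"Peak amplitude before: {np.max(np.abs(hidden)):.3f}, after: {np.max(np.abs(cleaned)):.3f}")
```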

Every person interested in owning an AI assistant must decide for themselves whether the benefits outweigh the risks in their case. Households with high automation (e.g. smart locks) or high-risk profiles (e.g. public figures) should take extra care to avoid their assistants being used against them.