Virtual assistants are increasingly present in our lives, whether as the devices we carry in our pockets, those in our homes, or even children's toys. In this article, we will discuss whether smart home devices are invading our privacy.
Alexa, Siri, and Google sit by your side all day, listening for their trigger word so they can answer your questions. What most of us fail to realize is that they must constantly listen to everything around them until that trigger word is spoken.
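Conceptually, always-on listening works like this: audio sits in a short rolling buffer on the device and is only streamed out once the wake word is detected. The sketch below is illustrative only; the function names are hypothetical and real devices use an on-device acoustic model, not string matching:

```python
from collections import deque

WAKE_WORD = "alexa"  # hypothetical trigger word

def contains_wake_word(frame: str) -> bool:
    # Stand-in for the on-device wake-word detector. Real assistants run a
    # small local speech model; a substring check plays that role here.
    return WAKE_WORD in frame.lower()

def listen(frames, buffer_size=2):
    """Simulate always-on listening: keep a short rolling buffer on-device,
    and only 'transmit' audio once the wake word has been detected."""
    buffer = deque(maxlen=buffer_size)  # pre-wake audio, kept locally only
    transmitted = []
    awake = False
    for frame in frames:
        if awake:
            transmitted.append(frame)   # streaming to the cloud
        elif contains_wake_word(frame):
            awake = True
            transmitted.append(frame)   # wake word heard: start streaming
        else:
            buffer.append(frame)        # old audio falls out and is discarded
    return transmitted

# Everything before "alexa" stays in the local buffer and is never sent:
listen(["chatter", "more chatter", "alexa, what time is it?", "thanks"])
# -> ["alexa, what time is it?", "thanks"]
```

The privacy question turns on the "buffer" step: the microphone is genuinely hot the whole time, and we are trusting the vendor that pre-wake audio really is discarded.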
These types of smart devices are manufactured by several companies, some of which are better known and others less so.
However, it is worth mentioning that individuals are responsible for their own privacy, and that it is necessary to evaluate what each company says about how its products collect data.
You may think your data is discarded immediately after you use a device. Unfortunately, that is not true.
Artificial intelligence (AI) learns from what it hears and from interactions with its owners. But did you ever stop to ask what Amazon Alexa actually is?
If you feel uncomfortable with the idea of being overheard, or of your data being stored and evaluated by someone, it is time to think about how useful smart devices really are in your life, because you will need to compromise.
The best-known companies are Google, Amazon, Samsung, and Apple. And they have access to almost everything you give them permission to access.
Is it beneficial to us?
Did you ever wonder how Google Maps knows where traffic is heavy and where it isn't? Let me remind you that Google knows where you are, and where everyone else is as well.
There are many stories and theories about what the data is used for. And sometimes, even these big companies screw up. If huge brands can make careless mistakes, imagine the ones with little to no experience!
And once we authorize the collection and storage of our data, this information is in the hands of third parties, who analyze it and mostly use it to improve their business.
For instance, Facebook analyzes how long you spend on each post, and your likes and dislikes, to serve you ads worth your while. So, in a way, this data benefits your personal use.
But the factor that worries most people is the release of their personal information into the world, and who can get their hands on their personal lives.
Do you feel safe?
For most people, home is a safe haven, a place where you can wear what you want and do whatever makes you happy. But it is also the most private place.
Even if you think you have nothing to hide, the things you do inside your own home are private. And I don't know about you, but I would like to keep them that way.
Let's look at some things that have happened to companies working with these systems, draw our own conclusions, and learn that it is up to us to enforce our privacy.
Technology companies say they are not spying on their customers through electronic devices and that they only listen when expressly told to do so. So, it would be logical for our recordings to be deleted immediately after use.
But that does not mean that no listening is happening, or could not happen, in ways that defy traditional notions of privacy. Saying "OK, Google" wakes that company's devices; Amazon's respond to "Alexa," and so on.
But, once listening begins, what happens next?
Sometimes, when Siri cannot handle what you have asked and a statement needs to be sent to the cloud for further analysis at Apple, it is tagged with an encoded identifier instead of the user's real name.
The data is saved for six months so that the speech recognition system can learn to understand you better. After that, another copy is saved, without your knowledge, to help improve Siri for up to two years.
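Apple has not published the exact mechanism, but the idea of an "encoded identifier" can be sketched as replacing the account name with a random token before the query leaves the device. Every name below is illustrative, not Apple's real pipeline:

```python
import uuid

# Maps real account names to random tokens; in this sketch it lives
# on-device and is never uploaded. Purely illustrative.
_session_ids = {}

def pseudonymize(account: str, query: str) -> dict:
    """Tag a cloud-bound query with a random identifier, not the real name."""
    if account not in _session_ids:
        _session_ids[account] = uuid.uuid4().hex  # random, unlinkable token
    return {"speaker_id": _session_ids[account], "query": query}

payload = pseudonymize("jane@example.com", "set a timer for ten minutes")
# The cloud sees only something like {'speaker_id': '3f2b...', 'query': ...}
```

Note that pseudonymization is weaker than anonymization: as long as the same token recurs, queries can still be linked to one another, which is exactly what lets a speech system "learn to understand you better."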
Most companies do their best to evaluate the content of the information received by their creations and learn how they can improve the product.
From what they learn from consumers, companies can learn what else is needed and what else they can add to the product. But the review process can also reveal many things that happen with these devices.
Google's invasion of privacy
In 2017, Google invited journalists to a product launch in San Francisco. After the presentation, Google handed out Google Home Minis as gifts to the participants.
After a few days with the Mini, one of the journalists went online to check his voice search activity and was shocked to see several short audio clips that had been recorded.
This participant complained to Google and to a website called Android Police, and the company sent a representative to exchange the defective device for two replacement units.
But did that solve the problem? Not at all. The device was recording "random events," which can go unnoticed if we aren't aware of what is being recorded.
The problem was corrected through a software update. To further allay fears, the company announced that the touch feature was permanently disabled on all Google Home Minis.
However, it was not enough for the Electronic Privacy Information Center, a consumer protection group.
In a letter dated October 13, 2017, the group urged the Consumer Product Safety Commission to recall the Google Home Mini because it "allowed Google to intercept and record private conversations in homes without the consumer's knowledge or consent."
Any user can log in to their Google or Amazon account and see a list of all their queries. And that should be possible with any company that makes these recordings!
As Google's user policy states: "The conversation history with Google Home and the Google Assistant is saved until you choose to delete it."
Is this a new privacy issue?
It depends on your point of view. And it becomes very complicated, because new situations arise every day. For instance, is your browser an invasion of privacy?
Google and other search engines similarly retain all of your typed web queries unless you delete them. You could therefore argue that voice archiving is more of the same. But for some people, being recorded feels much more invasive.
For law enforcement agencies to obtain recordings or data stored locally (on phones, computers, or smart home devices), they need to obtain a search and seizure warrant.
But privacy protection is considerably weaker after your voice has been transmitted to the cloud. If you have installed a device that listens to you and transmits your voice to a third party, you have waived your privacy rights.
According to a Google transparency report, U.S. government agencies requested data on more than 170,000 user accounts in 2017.
And here comes the million-dollar question: smart home devices may be invading our privacy, but if it's for your safety, would you mind being recorded?
Companies employ password protection and data encryption to combat espionage, but tests by security researchers and hacker breaches demonstrate that these protections are far from infallible.
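"Password protection," for instance, usually means a service stores only a salted hash of your password, so a database leak does not directly expose it. A minimal sketch with Python's standard library; the iteration count is illustrative, and real services tune it much higher:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest); only these are stored, never the password."""
    if salt is None:
        salt = os.urandom(16)  # random per-user salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("letmein", salt, digest)
```

The point of the article stands even here: the scheme is only as strong as its deployment, and breaches happen precisely where companies skip steps like the salt or the slow hash.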
What about children?
One line of toys created a way for parents and children to interact through messages recorded on stuffed animals. And children are exposed not only to toys, but to other devices as well.
The message is transmitted via Bluetooth and goes to a distant parent or relative. The parent, in turn, could record a message on their phone and send it to the stuffed animal.
The problem is that CloudPets, the company that made these toys, left the credentials of more than 800,000 customers, along with 2 million messages recorded between children and adults, in an easily discoverable online database.
Hackers collected much of that data in early 2017, and even demanded a ransom from the company before releasing the illegally obtained content.
One of the big problems was the Bluetooth pairing between the toy and the phone: the smartphone application did not use encryption or require authentication.
Bluetooth typically has a range of about 10-30 meters, so someone outside your home could easily connect to the toy, upload audio recordings, and receive audio from its microphone.
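The flaw is easy to state in code: a toy that demands a shared secret before pairing would reject a stranger's laptop, and the CloudPets app reportedly performed no such check. A toy-sized sketch, with no real Bluetooth stack involved and all names invented for illustration:

```python
class Toy:
    """A stuffed animal that refuses to pair without the right PIN."""

    def __init__(self, pin: str):
        self.pin = pin      # e.g. printed inside the toy's battery cover
        self.paired = []

    def pair(self, device: str, attempt: str) -> bool:
        # The vulnerable app skipped any check at this step, so anyone in
        # Bluetooth range could connect and pull or push audio.
        if attempt != self.pin:
            return False    # unauthenticated stranger: rejected
        self.paired.append(device)
        return True

toy = Toy(pin="4821")
toy.pair("attacker-laptop", "0000")  # False: never paired
toy.pair("parent-phone", "4821")     # True: authorized device
```

Even this trivial gate changes the attack from "walk past the house" to "know a secret," which is the whole difference authentication makes.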
In lab tests, security researchers (hired for the job) successfully attacked the voice interfaces of Amazon, Apple, Google, Microsoft, and Samsung devices.
They tricked the voice AIs into visiting malicious websites by sending fake text messages and emails, darkening the screen, and turning down the volume to help hide the attack.
They managed to get the devices to make illegitimate phone and video calls, which means a hacker could hear, and even see, what was going on around the victim. But there are scenarios that complicate the question of surveillance.
Scenarios that bring up questions
By reviewing QA chat logs in the manner described above, conversation designers can hear things that almost beg them to take action. Consider a few hypothetical scenarios:
What if a child says to an AI toy, "My dad beats my mom"? Or "My uncle touches me in a funny place"?
Designers think it would be a moral failure to ignore such admissions. But if they report what they hear to the police, they would be taking on the role of spies.
Feeling uncomfortable, they decided that the toy's response should be something like, "This sounds like something you should say to an adult you trust."
The limit to privacy
If in a conversation review there is a concern about the safety of a child or other people, what would be the appropriate privacy limit to be followed?
In that case, are smart home devices invading our privacy, or finding unusual ways to keep us safe?
Because their virtual assistants handle millions of voice queries a week, companies have no employees monitoring claims user by user. Therefore, they will not detect an event immediately!
But companies train their systems to capture certain highly sensitive things that people can say.
For example, if Siri hears "I want to kill myself," it replies: "If you are thinking about suicide, you may want to talk to someone on the National Suicide Prevention Lifeline."
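Under the hood, this kind of response can be as simple as checking each utterance against a short list of sensitive phrases before any normal intent handling. A minimal sketch; the phrase list and replies are illustrative, not any vendor's actual rules:

```python
# Highly sensitive phrases mapped to fixed safety replies, checked first.
# Illustrative only -- real assistants use trained classifiers, not substrings.
SAFETY_REPLIES = {
    "kill myself": ("If you are thinking about suicide, you may want to talk "
                    "to someone on the National Suicide Prevention Lifeline."),
    "hurt myself": "You are not alone. Please reach out to someone you trust.",
}

def respond(utterance: str) -> str:
    text = utterance.lower()
    for phrase, reply in SAFETY_REPLIES.items():
        if phrase in text:
            return reply                  # safety reply overrides everything
    return "Sorry, I didn't catch that."  # normal intent handling would go here

respond("I want to kill myself")
# -> the Lifeline reply above
```

The sketch also shows why the limits are ill-defined: everything depends on which phrases someone chose to put in the list, and on what the system should do beyond merely replying.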
But the problem with letting virtual assistants take care of us is that the role implies a great responsibility with ill-defined limits. And everyone would have to meet the same standard.
If you tell a virtual assistant that you are drunk, it offers to call a taxi. But if you don't, and you end up in a car accident, is the company in any way responsible for what the device failed to say? When should a listening device spring into action?
If a device hears someone screaming, "Help, help, they are trying to kill me!", should the AI automatically call the police? Will personal assistants be responsible for the knowledge they have?
The bottom line
The uses of AI surveillance make it clear that you must thoroughly examine every one of these technologies you allow into your life.
And when in doubt, do not agree to anything, especially with big companies whose privacy policies cannot be easily understood. Always research first!
And keep in mind that these electronics are meant to help us, and you always have the alternative of not being recorded. Better to understand what you bring into your home and place near the people you love.
So, with all this in mind, do you think smart home devices are invading our privacy?