Deontology defines the inherent human rights as the abilities to think, choose, and act. With the emergence of the Internet of Things and all that comes with it, these basic rights, in terms of privacy, are called into question:
How can we trust a company that is harnessing every little detail about us on a day-to-day basis?
The idea of big data seems highly intrusive compared to 20 years ago, when Google was just starting to take off. Back then, people collected data in order to support a predetermined hypothesis. Afterwards, there was no longer any use for the data – or so we thought.
Nowadays, there are literal warehouses of data meant to preserve all sorts of information. If this information is intended to make all of our lives easier, then why are we concerned in the first place? Nobody has a problem telling their story to a salesperson face-to-face in order to receive better service, so why is it any different with today’s automation?
The case of the Amazon Echo is a great place to start when analyzing whether the harnessing of our data hinders or helps our ability to think, choose, and act. Through this deontological analysis, the Echo will ultimately be deemed a benign product that does not infringe on humanity’s right to privacy.
By now, most have heard of the Amazon Echo, a virtual assistant that is activated and controlled by voice. The cylindrical speaker houses an assistant known as Alexa, who activates once her name is spoken.
Pause: wouldn’t Alexa always have to be listening to the world around her if she’s going to activate on voice command?
That’s correct. However, just because the device is listening doesn’t mean it’s recording. An article on Fox News reports that the Echo records the “wake word” and the command, the wake word being “Alexa” and the command being “Play this song…” or “How’s the weather today?” The Echo also records a fraction of a second of audio spoken before the wake word. This is for the most part negligible, because whatever is said just before the wake word is likely related to the command and probably can’t offer any more insight than the command itself.
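The distinction between listening and recording can be illustrated with a toy sketch. To be clear, this is purely hypothetical – Amazon’s actual implementation is not public – but it shows how a device can hear audio continuously while keeping only a tiny rolling buffer, so that nothing is captured until the wake word arrives:

```python
from collections import deque

WAKE_WORD = "alexa"
PRE_ROLL = 2  # keep only the last 2 chunks (a fraction of a second) in memory

def listen(audio_chunks):
    """Toy wake-word loop: audio is heard continuously, but only the small
    pre-roll buffer plus the command is ever captured (returned)."""
    buffer = deque(maxlen=PRE_ROLL)  # older audio falls off the end, never stored
    recording = None
    for chunk in audio_chunks:
        if recording is not None:
            recording.append(chunk)             # wake word heard: capture the command
        elif WAKE_WORD in chunk.lower():
            recording = list(buffer) + [chunk]  # include the brief pre-roll
        else:
            buffer.append(chunk)                # heard, then discarded moments later
    return recording

chunks = ["talking", "about dinner", "hey", "Alexa", "play some jazz"]
print(listen(chunks))  # "talking" was heard but never kept
```

Everything said before the short pre-roll window simply evaporates; only the wake word and what follows it ever leaves the loop.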
Fortunately, people can opt out of sharing their data with Alexa. Amazon warns that this will degrade the user experience of the product, but there is plenty of other data available for Amazon to use to tailor the experience. The CAN-SPAM Act, passed to regulate email spam, recognized this “opt-out” method as an acceptable way of doing business, as long as the company clearly communicates how to opt out. This is vital in allowing the customer to think, choose, and act.
Method of collection
While the type of data collected from Echo users is no different from the kind gathered through online text searches and clicks, the method of collection is significantly different. Participating in an online survey is clearly intentional, whereas having your credit card history sold to a third party isn’t exactly a conscious, real-time decision on the consumer’s part.
With Alexa, data collection goes from monitoring website behavior to listening in on people’s “private” conversations in their own homes. Sandeep Mittal, an accomplished cybersecurity specialist, notes that the right to privacy in general is “the right of the individual to be left alone; to live quietly, to be free from unwarranted intrusion to protect his name and personality from commercialisation.” How does the Amazon Echo hold up against these criteria? What does Mittal mean when he says that one’s name and personality are to be protected from commercialisation?
Customer endorsements aside, Amazon is not making money off of individual names or identities, nor is any other company handling big data. However, they do make money off the personalities of their customers. A college student’s behavior and product preferences can certainly be used to get a glimpse of her lifestyle and personality. If these attributes are being used to increase sales, then through Mittal’s eyes this might inherently encroach on our fundamental right to privacy.
For starters, the customer bought the product and placed it inside his or her home. Secondly, customers can’t use the product without accepting the terms of service, a legally binding agreement in which the customer gives Amazon full permission to conduct whatever practices are listed within it. The problem is that nobody reads those terms, so how transparent is the arrangement, really?
The Data & Marketing Association (DMA), the United States’ guiding hand for best marketing practices, recently updated its data guidelines during these turbulent times of data use. The overarching message it wanted to convey is the same one being discussed here: transparency. In terms of how companies should inform users of their data practices, the DMA says, “Entities should make their data practices available to Consumers in a prominent place on their website’s or application’s home page or in a place that is easily accessible from the home page or the functional equivalent.” The DMA also notes that companies should be clear about what data they are collecting and how it is being used. This is something all companies can improve on.
And that doesn’t mean they lack the opportunity to.
Amazon’s not off the hook yet – and I mean yet – because these privacy statements alone don’t seem to affect our behavior. But a person doesn’t have to be cognizant of what is going on in order to have their rights compromised.
Deontology holds that if someone is lied to, then their ability to think, choose, and act is hindered by the false information provided. By the same logic, consider that Amazon uses artificial intelligence to suggest products to people who may be (should be) interested. For someone who likes to do her own shopping and make her own decisions, doesn’t automated personalization take away her ability to make a decision of her own? That’s a tough question.
On one hand, Amazon is helping her make a decision. On the other hand, Amazon is promoting a product that may not be relevant. In his book Big Data, Viktor Mayer-Schönberger recalls the inception of Amazon’s product suggestions back when the company was still exclusively selling books. Mayer-Schönberger describes an instance where someone purchased a book about Poland and was then bombarded with suggestions of books about Eastern European history. In cases like that, the suggestions don’t exactly make thinking, choosing, or acting any easier. Yet over time, Amazon’s algorithms have grown much more accurate.
In fact, Mayer-Schönberger notes that one-third of Amazon’s sales revenue comes from automated suggestions. So at this point it is clear that Amazon’s suggestions are indeed resonating with its customers. They speed up a consumer’s ability to think and choose, so they can act on something that enhances their life.
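The shape of those suggestions – including both the early “books about Poland” misfires and the later accuracy as data accumulated – can be sketched with a deliberately simplified co-purchase recommender. This is an illustration only, not Amazon’s actual system: items that other customers bought together are ranked by how often they co-occur, so more histories mean better rankings.

```python
from collections import Counter
from itertools import combinations

def build_co_purchase(histories):
    """Count how often each pair of items appears in the same purchase history."""
    pairs = Counter()
    for history in histories:
        for a, b in combinations(sorted(set(history)), 2):
            pairs[(a, b)] += 1
    return pairs

def suggest(item, pairs, top_n=3):
    """Rank other items by how often they were bought alongside `item`."""
    scores = Counter()
    for (a, b), count in pairs.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

# Hypothetical purchase histories echoing Mayer-Schönberger's anecdote:
histories = [
    ["poland travel guide", "eastern europe history"],
    ["poland travel guide", "eastern europe history", "phrasebook"],
    ["poland travel guide", "cookbook"],
]
print(suggest("poland travel guide", build_co_purchase(histories)))
```

With only three histories, a history book tops the list for a travel-guide buyer whether or not she cares about history; with millions of histories, the counts start to separate genuine affinities from coincidence.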
Technology is ahead of legislation
Last year, authorities subpoenaed Amazon for Echo data on a suspect in a murder case. The defendant, James Bates, gave Amazon his consent, so investigators had access to everything he had said to Alexa. Amazon initially pushed back, but because of Bates’ consent, the public will never know how Amazon would have seen the situation through (until authorities try again).
A similar instance occurred with Apple when government officials demanded the encrypted account information of a suspected terrorist. Apple never gave in, giving its customers no reason to worry about their data being handed over. Given Amazon’s initial pushback, it’s safe to say that Amazon does not want to give up its customers’ data either.
While there’s currently no reason to worry about the police prying open an Amazon Echo to find incriminating information, there’s also no reason lawmakers can’t pass a bill that requires big data handlers to cooperate with government officials.
With all that said, there is still plenty of opportunity for companies like Amazon to get in the way of consumers’ personal desires. Google has its own virtual assistant. Apple’s Siri is also listening. Samsung has a refrigerator that can determine how many eggs are left and when the milk is going bad, and that will remind its owner to replenish the fridge on the way home from work. All of these companies are required to provide privacy policies.
Or maybe customers will continue to swipe as fast as they can until they’ve reached the “I Accept” button. We won’t know until companies respond to the new DMA rules.
While the Echo makes it easier for people to think, choose, and act by using artificial intelligence to suggest relevant products, who knows what tomorrow will bring? All Amazon has to do is change one part of its algorithm, and customers will be urged to buy products that benefit Amazon rather than the individual. This is why it is important to stay vigilant, and for organizations like the DMA to stay ahead of the industry while continuing to update their rules and guidelines.