Amazon is once again making headlines with its Alexa virtual assistant – a service loved and relied upon by millions of users. However, the latest news may leave you reconsidering whether to invite the device into your home.

The introduction of the Amazon Echo speakers and the corresponding human-sounding virtual assistant, Alexa, may have seemed a little ‘weird’ at first, inviting this level of technology into our daily lives, especially for early adopters willing to go out on a limb and give the device a try while it was still largely unknown. However, once the many uses of Alexa became apparent, demand for the device skyrocketed. Today, it’s used for everything from playing music and controlling lights around the home to shopping from Amazon.

As with any technology, there are going to be hiccups or ‘quirks’ that eventually come to light. For Alexa, the virtual assistant technology certainly hasn’t come without some funny stories. The technology is far from perfect, especially if you have children or pets in the home, or if the device is set up near a radio or television. In fact, stories of children and even parrots placing orders through the device have begun to circulate on social media. If you need a good laugh, you won’t be disappointed!

Recently, however, one user (who chose to remain anonymous) left a scathing review on Amazon’s website, stating that the device advised him to “kill your foster parents”, adding that it was “a whole new level of creepy”. The incident was investigated and found to be genuine, with a source at Amazon reporting that the device in question was actually quoting from the popular website Reddit. It highlights a concern with this form of artificial intelligence (AI) technology.

The device uses machine learning, a popular form of AI that allows Alexa to respond to our statements and questions as if it truly ‘understands’ the conversation. What is really happening behind the scenes, however, is that the device records and transcribes the words we speak, then responds based on knowledge it has previously acquired as well as by searching the internet, making an ‘educated guess’ at the best answer. If the answer is wrong and you correct it, it will learn from the experience, adding to its databank of information.
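To make that loop concrete, here is a rough sketch in Python. It is only a caricature under loose assumptions: the function names, the stub ‘transcription’ and ‘web search’ steps, and the dictionary ‘databank’ are illustrative placeholders, not Amazon’s actual implementation.

```python
# A toy version of the record -> transcribe -> answer -> learn loop described above.
# Everything here is a stand-in; real assistants use far more sophisticated models.

def transcribe(audio):
    # Stand-in for the speech-to-text step; here the "audio" is already text.
    return audio.strip().lower()

def search_the_web(question):
    # Stand-in for an internet lookup; a real system would query many sources.
    return f"(best guess pulled from the web for: {question})"

class SimpleAssistant:
    def __init__(self):
        # Previously acquired knowledge: question -> best known answer.
        self.knowledge = {}

    def answer(self, audio):
        question = transcribe(audio)
        if question in self.knowledge:        # use prior knowledge first
            return self.knowledge[question]
        guess = search_the_web(question)      # otherwise make an 'educated guess'
        self.knowledge[question] = guess      # remember it for next time
        return guess

    def correct(self, audio, better_answer):
        # A user correction replaces the stored answer, so the assistant
        # 'learns from the experience'.
        self.knowledge[transcribe(audio)] = better_answer

assistant = SimpleAssistant()
print(assistant.answer("What is the tallest mountain?"))
assistant.correct("What is the tallest mountain?", "Mount Everest")
print(assistant.answer("What is the tallest mountain?"))
```

Even in this toy version the weak point is visible: whatever the ‘search’ step returns gets stored and repeated, which is exactly how unvetted content can end up coming out of the speaker.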

This leaves the device open to what Reuters refers to as a ‘Pandora’s Box’ for Amazon. When searching and responding, it could pull its answer from any number of sources. For example, The Washington Post has a licensing deal with Amazon that gives the device access to search its articles for an answer. However, it may also pull information from popular posts on social media, or from websites like Wikipedia. As such, the company doesn’t always have control over the information the device will then share.

The result? Some owners have reported concerning answers, experiences, and conversations, including graphic descriptions of sexual encounters, eerie laughter for no apparent reason, and more. While the device has been set up with filters to remove profanity and ‘sensitive topics’, the filters certainly aren’t without flaws.
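The kind of filter mentioned here can be imagined as a simple block-list check, sketched below. The blocked terms and the matching rule are assumptions for illustration only; Amazon’s real filters are undoubtedly more sophisticated, but the same basic weakness applies: anything the list doesn’t anticipate slips through.

```python
# Illustrative sketch of a block-list style content filter.
# The blocked terms and matching rule are placeholders, not Amazon's actual filter.

BLOCKED_TERMS = {"kill", "murder"}  # a real list would be far longer

def is_safe_to_speak(response):
    # Flag a response if any individual word matches the block list.
    words = {word.strip(".,!?\"'").lower() for word in response.split()}
    return words.isdisjoint(BLOCKED_TERMS)

def filtered_response(response):
    return response if is_safe_to_speak(response) else "Sorry, I can't repeat that."

print(filtered_response("Here is today's weather report."))        # passes
print(filtered_response("You should kill your foster parents."))   # blocked
```

A phrase that is disturbing without using any blocked word would sail straight through, which is one reason such filters aren’t without flaws.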

Furthermore, the devices record conversations, which can expose users to considerable privacy risks. Many users don’t even realize this is happening behind the scenes, yet there have been reported incidents in which these recordings were accessed or even accidentally sent to other people. While such incidents are rare, all users should make sure they are aware of the potential risks before use.

Feature Image Source: AP | The Washington Post
