Personal Assistants: Accuracy Scores And Mistakes They Make

by @lauriesullivan, April 24, 2018

Digital personal assistants continually learn new ways to respond to voice queries, but some still have work to do. While semantics, accuracy and word recognition count, understanding why the assistants make certain mistakes can also help marketers optimize content on their sites and across the web.

The 2018 update to Stone Temple Consulting’s 2017 study of voice assistants tests the same 4,942 queries on five platforms — Google Assistant on Google Home and on a smartphone, along with Amazon Alexa, Microsoft Cortana on Invoke, and Apple Siri — measuring, as in 2017, the percentage of questions each assistant attempts to answer and the percentage it answers correctly.

As in 2017, Google Assistant on a smartphone still answers the most questions and has the highest percentage of accuracy. Cortana answers the second highest percentage of questions correctly.

Google Assistant on Google Home came in at No. 3 in terms of the percentage of fully and correctly answered questions. Alexa, at No. 4, showed the largest year-over-year improvement, answering 2.7 times as many queries this year as last year. Siri came in last both in fully and correctly answering questions and in the percentage of answers attempted.

The 2018 findings show that all four personal assistants increased the number of attempted answers, but the most dramatic change came from Alexa, which went from 19.8% to 53.0%. Cortana on Invoke saw the second-largest increase in attempted answers, going from 53.9% to 64.5%, followed by Siri, which rose from 31.4% to 40.7%.

It’s not enough for a personal assistant simply to attempt an answer, so the study also analyzes the types of mistakes personal assistants make most often.

Alexa had the most incorrect responses, but it also expanded the number of questions it attempted to answer by far the most. Siri was on par with Alexa in the number of incorrect answers, up slightly from the 2017 results.

Many of the errors for both Alexa and Siri came from poorly structured queries or obscure queries such as “What movies does The Rushmore, New York appear in?” according to the report.

More than one-third of the queries generating incorrect responses from both Alexa and Siri were obscure. Not all errors stemmed from obscure queries, however. When asked “Who is the voice of Darth Vader?” Siri responded with a list of movies featuring Darth Vader.

Analysts at Stone Temple Consulting also examined how each personal assistant supports featured snippets — answers a personal assistant or search engine provides from a third-party source. Google Assistant on a smartphone had the highest number, followed by Google Assistant on Google Home, Cortana on Invoke, Siri, and Alexa.

Digital personal assistants have a sense of humor. They all like to tell jokes in response to some questions. Alexa told the most jokes in the 2018 study — taking the lead from Siri, which held the No. 1 spot in 2017.

MediaPost.com: Search Marketing Daily
