Getting your TV to better understand you

Monday, August 20, 2018

Researchers at the University of Waterloo and the University of Maryland have collaborated with the Comcast Applied AI Research Lab to improve the voice query understanding capabilities of the Comcast Xfinity X1 entertainment platform.

Today, we have become accustomed to talking to intelligent agents that do our bidding, from Siri on a mobile phone to Alexa at home. Why wouldn’t we be able to do the same with TVs? Comcast’s Xfinity X1 does exactly that: the platform comes with a “voice remote” that accepts spoken queries. Your wish is its command. Tell your TV to change channels, ask it about free kids’ movies, or even check the weather forecast.

Although the media and technology company had already delivered more than 20 million voice remotes to customers by the end of 2017 and fielded billions of voice commands, the Comcast team still saw room for improvement in one of its most popular products. Thanks to an AI-powered platform and Comcast’s rich trove of entertainment metadata (information such as a show’s title, actors and genre), the device returns remarkably accurate results, but it would still sometimes respond oddly. This was partly because the platform’s AI is based on pattern matching, which does not always correctly interpret user intent.

Enter Jinfeng Rao, a newly minted PhD from the University of Maryland, who together with his advisor, Professor Jimmy Lin at the University of Waterloo, and his mentor, Ferhan Ture, a researcher at the Comcast Applied AI Research Lab, set about tackling the complex problem of understanding voice queries. Their idea was to take advantage of the latest AI technology, a technique known as hierarchical recurrent neural networks, to better model context and improve the system’s accuracy. The research will be presented at the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, held August 19 to 23, 2018 in London, United Kingdom.

How does their technique work? Dr. Rao explains: “Say the viewer asks for ‘Chicago Fire,’ which refers to both a drama series and a soccer team; how does the system determine what you want to watch? What’s special about this approach is that we take advantage of context, such as previously watched shows and favourite channels, to personalize results, significantly increasing accuracy.”
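To make this concrete, here is a minimal sketch of what a hierarchical recurrent network for contextual query understanding might look like, written in Python with PyTorch. It illustrates the general technique named above, not the production X1 model; the class name, layer sizes, and toy data are all assumptions. A lower-level network encodes the words of each query, and an upper-level network runs over the sequence of queries in a viewing session, so earlier context can help disambiguate the current request.

import torch
import torch.nn as nn

# Illustrative sketch only: class name, layer sizes, and data are assumptions.
class HierarchicalQueryModel(nn.Module):
    def __init__(self, vocab_size, num_intents, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Lower level: encodes the words within a single query.
        self.word_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Upper level: encodes the sequence of queries in a session,
        # so earlier queries provide context for the current one.
        self.session_rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_intents)

    def forward(self, session):
        # session: (batch, num_queries, num_words) of token ids
        b, q, w = session.shape
        words = self.embed(session.view(b * q, w))        # (b*q, w, embed_dim)
        _, query_vec = self.word_rnn(words)               # (1, b*q, hidden_dim)
        query_vecs = query_vec.squeeze(0).view(b, q, -1)  # (b, q, hidden_dim)
        _, session_vec = self.session_rnn(query_vecs)     # (1, b, hidden_dim)
        return self.classifier(session_vec.squeeze(0))    # (b, num_intents)

# Toy usage: one session of two 4-token queries; the classifier scores
# two hypothetical intents (e.g., the TV drama vs. the soccer team).
model = HierarchicalQueryModel(vocab_size=1000, num_intents=2)
session = torch.randint(0, 1000, (1, 2, 4))
print(model(session).shape)  # torch.Size([1, 2])

In a setup like this, the two readings of “Chicago Fire” would receive different scores depending on what the viewer asked for earlier in the session.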

The X1 Voice Remote is a “listening” remote control that allows users to find what they want faster by using voice commands to change channels, search for shows, get recommendations, find out what song is playing on their TV screen, and more.

In January 2018, the researchers’ new neural network model was deployed in production to answer queries from real users. “Our previous system was totally confused by a small but consistent percentage of queries,” says Dr. Ture. “The new model handles most of these very difficult queries appropriately, greatly enhancing the user experience.” Today, the model answers millions of user queries per day.

Not content with this success, the researchers have begun developing an even richer model, which is also outlined in their paper. The intuition is that by analyzing queries from multiple perspectives, the system can better understand what the viewer is saying; a sketch of this idea appears below. The richer model is currently being readied for deployment.
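The paper’s title points to multi-task learning as the mechanism behind these “multiple perspectives.” As a hedged illustration of that general idea (the two tasks, names, and sizes below are assumptions, not the paper’s exact architecture), one shared query encoder can feed several task-specific heads, with the per-task losses summed during training so each task’s supervision helps the others:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only: the two tasks, names, and sizes are assumptions.
class MultiTaskQueryModel(nn.Module):
    def __init__(self, vocab_size, num_intents, num_channels,
                 embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One encoder shared by every task.
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Task-specific heads interpret the shared representation.
        self.intent_head = nn.Linear(hidden_dim, num_intents)
        self.channel_head = nn.Linear(hidden_dim, num_channels)

    def forward(self, query):
        _, h = self.encoder(self.embed(query))  # h: (1, batch, hidden_dim)
        shared = h.squeeze(0)
        return self.intent_head(shared), self.channel_head(shared)

# Toy usage: a batch of 8 queries, each 6 tokens long.
model = MultiTaskQueryModel(vocab_size=1000, num_intents=5, num_channels=300)
query = torch.randint(0, 1000, (8, 6))
intent_logits, channel_logits = model(query)

# Training sums the per-task losses so the shared encoder learns
# from both perspectives jointly (labels here are random placeholders).
loss = (F.cross_entropy(intent_logits, torch.randint(0, 5, (8,)))
        + F.cross_entropy(channel_logits, torch.randint(0, 300, (8,))))
loss.backward()

Sharing the encoder in this way means that evidence useful for one prediction, say which channel the viewer wants, can also sharpen the others.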

Professor Jimmy Lin, who holds the David R. Cheriton Chair in the David R. Cheriton School of Computer Science at the University of Waterloo, sums up the research: “This work is a great example of a successful collaboration between academia and industry that yielded significant real-world impact. My research group aims to build intelligent agents that can interact with humans in natural ways, and this project provides a great example of how we can deploy AI technologies to improve the user experience.”


For more information about this research, see Jinfeng Rao, Ferhan Ture, and Jimmy Lin, “Multi-task learning with neural networks for voice query understanding on an entertainment platform,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 2018), London, United Kingdom, August 19–23, 2018.
