SYDE Professor's work on TinyML for speech recognition featured in media

Tuesday, October 20, 2020

Prof. Alexander Wong's work on TinySpeech, a family of compact deep neural networks for on-device speech recognition, was featured in the media, including Synced and Hackster.io:

Advances in natural language processing (NLP) driven by BERT and other transformer models have produced state-of-the-art performance in tasks such as speech recognition, powering a range of applications including voice assistants and real-time closed captioning. The widespread deployment of deep neural networks for on-device speech recognition, however, remains a challenge, particularly on edge devices such as mobile phones.

In a new paper, researchers from the University of Waterloo and DarwinAI propose novel attention condensers designed to enable the building of low-footprint, highly efficient deep neural networks for on-device speech recognition at the edge. The team demonstrates low-precision “TinySpeech” deep neural networks built from these attention condensers and tailored specifically for limited-vocabulary speech recognition.
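At a high level, the idea summarized above is a self-attention mechanism that operates on a condensed representation so the network stays small and efficient. The sketch below is a hypothetical illustration of that condense-embed-expand-reweight pattern in PyTorch; the class name, layer choices, and selective-attention formula are assumptions made for illustration and do not reproduce the TinySpeech architecture described in the paper.

```python
# Illustrative sketch only: a lightweight attention block in the spirit of an
# "attention condenser" (condense -> embed -> expand -> re-weight the input).
# Layer sizes and the exact attention formulation here are assumptions, not
# the architecture from the TinySpeech paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LightweightAttentionBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        # Condensation: downsample so attention is computed on a smaller map.
        self.condense = nn.MaxPool2d(kernel_size=reduction)
        # Embedding: a small depthwise-separable stack to keep parameters low.
        self.embed = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # Learned scale controlling how strongly attention modulates the input.
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # Attention values computed on the condensed representation.
        a = self.embed(self.condense(v))
        # Expansion: bring the attention map back to the input resolution.
        a = torch.sigmoid(F.interpolate(a, size=v.shape[-2:], mode="nearest"))
        # Selective attention: element-wise re-weighting plus a residual path.
        return v * a * self.scale + v


# Usage on a batch of spectrogram-like features (batch, channels, freq, frames).
x = torch.randn(8, 16, 40, 100)
block = LightweightAttentionBlock(channels=16)
print(block(x).shape)  # torch.Size([8, 16, 40, 100])
```

Because the attention is computed on a downsampled copy of the activations and expanded back, the extra compute and parameter cost stays small, which is the general motivation for this style of module on edge hardware.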

Click here for the full story.