Tuesday, October 20, 2020

Prof. Alexander Wong's work on TinySpeech, a family of compact deep neural networks for on-device speech recognition, was featured in the media, including in Synced and Hackster.io:

Advances in natural language processing (NLP) driven by the BERT language model and transformer architectures have produced state-of-the-art (SOTA) performance in tasks such as speech recognition and have powered a range of applications, including voice assistants and real-time closed captioning. The widespread deployment of deep neural networks for on-device speech recognition, however, remains a challenge, particularly on edge devices such as mobile phones.

In a new paper, researchers from the University of Waterloo and DarwinAI propose novel attention condensers designed to enable low-footprint, highly efficient deep neural networks for on-device speech recognition at the edge. The team demonstrates low-precision “TinySpeech” deep neural networks built from such attention condensers and tailored specifically for limited-vocabulary speech recognition.

