PhD Defence: Learning Sparse Orthogonal Wavelet Filters

Tuesday, September 25, 2018 — 1:00 PM EDT

Daniel Recoskie, PhD candidate
David R. Cheriton School of Computer Science

The wavelet transform is a well-studied and understood analysis technique used in signal processing. In wavelet analysis, signals are represented by a sum of self-similar wavelet and scaling functions. Typically, the wavelet transform makes use of a fixed set of wavelet functions that are analytically derived. We propose a method for learning wavelet functions directly from data. We impose an orthogonality constraint on the functions so that the learned wavelets can be used to perform both analysis and synthesis. We accomplish this by using gradient descent and leveraging existing automatic differentiation frameworks. Our learned wavelets are able to capture the structure of the data by exploiting sparsity. We show that the learned wavelets have similar structure to traditional wavelets.
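As a minimal sketch of why the orthogonality constraint matters (this uses fixed Haar filters for illustration, not the thesis's learned filters), an orthonormal filter pair lets the same filters perform both analysis and synthesis with perfect reconstruction:

```python
import numpy as np

# One level of the orthogonal wavelet transform with the Haar pair, the
# simplest orthonormal scaling/wavelet filters. The thesis learns such
# filters from data; fixed Haar filters suffice to show why orthogonality
# gives both analysis and synthesis.

def haar_analysis(x):
    """Split an even-length signal into scaling (approx) and wavelet (detail) coefficients."""
    pairs = np.asarray(x, dtype=float).reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)  # low-pass branch
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)  # high-pass branch
    return approx, detail

def haar_synthesis(approx, detail):
    """Invert haar_analysis exactly; orthonormality lets the same filters do double duty."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    return np.stack([even, odd], axis=1).ravel()

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 2.0, 0.0])
a, d = haar_analysis(x)
x_rec = haar_synthesis(a, d)
print(np.allclose(x_rec, x))             # perfect reconstruction
print(np.isclose(a @ a + d @ d, x @ x))  # orthonormal transform preserves energy
```

Note how a piecewise-constant signal like the one above yields many near-zero detail coefficients; a sparsity objective over such coefficients is what drives the learning in the thesis.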

Machine learning has proven to be a powerful tool in signal processing and computer vision. Recently, neural networks have become a popular and successful method for solving a variety of tasks. However, much of this success is not well understood, and neural network models are often treated as black boxes. This thesis aims to bring some intuition to the structure of neural networks. In particular, we consider the connection between convolutional neural networks and multiresolution analysis. We show that the wavelet transform shares similarities with current convolutional neural network architectures. We hope that viewing neural networks through the lens of multiresolution analysis may provide some useful insights.
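The connection to convolutional networks can be sketched concretely (an illustrative example, not code from the thesis): one level of the wavelet transform is a filter applied with stride 2, the same operation as a strided layer in a convolutional network.

```python
import numpy as np

def strided_conv(x, w, stride=2):
    """Valid correlation of x with filter w, downsampled by `stride` --
    the basic operation of a strided convolutional layer."""
    n = (len(x) - len(w)) // stride + 1
    return np.array([np.dot(x[i * stride : i * stride + len(w)], w) for i in range(n)])

h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # Haar low-pass filter
g = np.array([1.0, -1.0]) / np.sqrt(2.0)  # Haar high-pass filter

x = np.arange(8.0)
approx = strided_conv(x, h)  # scaling coefficients, as a stride-2 "conv layer"
detail = strided_conv(x, g)  # wavelet coefficients
```

In this view, learning the wavelet filters by gradient descent is analogous to training the weights of a two-channel, stride-2 convolutional layer, with an orthogonality penalty constraining the weights.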

We begin the thesis by motivating our method for one-dimensional signals. We then show that we can easily extend the framework to multidimensional signals. Our learning method is evaluated on a variety of supervised and unsupervised tasks, such as image compression and audio classification. The tasks are chosen to compare the usefulness of the learned wavelets to traditional wavelets, as well as to provide a comparison to existing neural network architectures. The wavelet transform used in this thesis has some drawbacks and limitations, due in part to our use of separable real filters. We address these shortcomings by exploring an extension of the wavelet transform known as the dual-tree complex wavelet transform. Our wavelet learning model extends into the dual-tree domain with few modifications, overcoming the limitations of our standard model. With this new model we are able to show that localized, oriented filters arise from natural images.
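The separable construction mentioned above, whose limitations motivate the dual-tree extension, amounts to applying a 1-D transform along each axis in turn. A rough sketch with fixed Haar filters (for illustration only; the thesis learns the filters):

```python
import numpy as np

def haar1d(x, axis):
    """One level of the 1-D Haar transform along `axis` (axis length must be even)."""
    x = np.moveaxis(np.asarray(x, dtype=float), axis, -1)
    pairs = x.reshape(*x.shape[:-1], -1, 2)
    lo = (pairs[..., 0] + pairs[..., 1]) / np.sqrt(2.0)
    hi = (pairs[..., 0] - pairs[..., 1]) / np.sqrt(2.0)
    return np.moveaxis(lo, -1, axis), np.moveaxis(hi, -1, axis)

def haar2d(img):
    """One level of the separable 2-D transform: filter rows, then columns,
    giving the four subbands LL, LH, HL, HH."""
    lo, hi = haar1d(img, axis=1)   # filter along rows
    ll, lh = haar1d(lo, axis=0)    # then along columns
    hl, hh = haar1d(hi, axis=0)
    return ll, lh, hl, hh

img = np.outer(np.arange(4.0), np.ones(4))  # intensity varies down rows only
ll, lh, hl, hh = haar2d(img)
```

Because the separable filters only respond along the two image axes, a purely vertical gradient like the one above leaves the HL and HH subbands empty; capturing orientations other than horizontal and vertical is one of the shortcomings the dual-tree complex wavelet transform addresses.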

Location
DC (William G. Davis Computer Research Centre), Room 2310
200 University Avenue West
Waterloo, ON N2L 3G1
Canada
