PhD Defence • Machine Learning (Artificial Intelligence) — Theoretical Foundations for Efficient Clustering

Wednesday, May 29, 2019 — 11:30 AM EDT

Shrinu Kushagra, PhD candidate
David R. Cheriton School of Computer Science

Clustering aims to group together data instances which are similar while simultaneously separating the dissimilar instances. The task of clustering is challenging due to many factors. The most well-studied is its high computational cost. The clustering task can be viewed as an optimization problem where the goal is to minimize a certain cost function (such as the k-means or k-median cost). Not only are these minimization problems NP-hard, they are often also NP-hard to approximate within a constant factor. There are two other major issues in clustering, namely under-specificity and noise-robustness.
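To make the optimization view concrete, here is a minimal sketch of the k-means cost mentioned above: the sum, over all points, of the squared distance to the nearest center. The function name and the toy data are ours, chosen for illustration; they do not come from the thesis.

```python
def kmeans_cost(points, centers):
    """k-means cost: sum of squared distances from each point to its
    nearest center (illustrative helper, not code from the thesis)."""
    return sum(
        min(sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for c in centers)
        for p in points
    )

points = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0)]
centers = [(0.5, 0.0), (10.0, 0.0)]
cost = kmeans_cost(points, centers)
print(cost)  # 0.25 + 0.25 + 0.0 = 0.5
```

Minimizing this quantity over all choices of k centers is the optimization problem that is NP-hard in general.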

The focus of this thesis is tackling these two issues while simultaneously ensuring low computational cost. Clustering is an under-specified task. The same dataset may need to be clustered in different ways depending upon the intended application, and different solution requirements call for different approaches. In such situations, domain knowledge is needed to better define the clustering problem. We incorporate this by allowing the clustering algorithm to interact with an oracle, asking whether two points belong to the same cluster or to different clusters.
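The oracle interaction described above can be sketched as a function that answers same-cluster queries against a hidden ground-truth labelling. This is only a toy model of the setting; the function names are ours, not from the thesis.

```python
def make_same_cluster_oracle(true_labels):
    """Wrap a hidden ground-truth labelling as a same-cluster oracle.
    The clustering algorithm sees only the query function, never the
    labels themselves. (Hypothetical sketch, not code from the thesis.)"""
    def same_cluster(i, j):
        return true_labels[i] == true_labels[j]
    return same_cluster

oracle = make_same_cluster_oracle([0, 0, 1, 1, 0])
print(oracle(0, 1))  # True  -- points 0 and 1 share a cluster
print(oracle(1, 2))  # False -- points 1 and 2 do not
```

The algorithmic question is then how few such queries suffice to make an otherwise intractable clustering problem efficiently solvable.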

In a preliminary work, we show that access to a small number of same-cluster queries makes an otherwise NP-hard k-means clustering problem computationally tractable. Next, we consider the problem of clustering for data de-duplication: detecting records which correspond to the same physical entity in a database. We propose a correlation-clustering-like framework to model such record de-duplication problems. We show that access to a small number of same-cluster queries can help us solve the ‘restricted’ version of correlation clustering. Rather surprisingly, more relaxed versions of correlation clustering are intractable even when the algorithm is allowed to make a ‘large’ number of same-cluster queries. Next, we explore the issue of noise-robustness of clustering algorithms. Many real-world datasets have, on top of cohesive subsets, a significant number of points which are ‘unstructured’. The addition of these noisy points makes it difficult to detect the structure of the remaining points.

In the first line of work, we define noise as not having significantly large dense subsets. We provide computationally efficient clustering algorithms that capture all meaningful clusterings of the dataset, where the clusters are cohesive (defined formally by notions of clusterability) and where the noise satisfies the gray background assumption. We complement our results by showing that when either the notions of structure or the noise requirements are relaxed, no such results are possible. In the second line of work, we develop a generic procedure that can transform objective-based clustering algorithms into ones that are robust to outliers (as long as the number of such points is not ‘too large’). In particular, we develop efficient noise-robust versions of two common clustering algorithms and prove robustness guarantees for them.
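One generic way to make an objective-based algorithm outlier-robust, in the spirit described above, is to ignore the points farthest from the current solution when updating it. The sketch below shows this "trimming" idea applied to a single Lloyd-style k-means step; it is a standard illustration under our own assumptions, not the thesis's actual procedure or guarantees.

```python
def trimmed_kmeans_step(points, centers, num_outliers):
    """One Lloyd-style update that ignores the num_outliers points
    farthest from their nearest center -- a generic trimming idea,
    NOT the regularization procedure from the thesis."""
    def d2(p, c):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, c))

    # Keep all but the num_outliers points farthest from their nearest center.
    scored = sorted(points, key=lambda p: min(d2(p, c) for c in centers))
    kept = scored[:len(points) - num_outliers]

    # Reassign the kept points and recompute centers from them only.
    clusters = [[] for _ in centers]
    for p in kept:
        idx = min(range(len(centers)), key=lambda i: d2(p, centers[i]))
        clusters[idx].append(p)
    return [
        tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centers[i]
        for i, cl in enumerate(clusters)
    ]

pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
       (10.0, 10.0), (10.0, 11.0), (11.0, 10.0),
       (100.0, 100.0)]  # one far-away outlier
new_centers = trimmed_kmeans_step(pts, [(0.0, 0.0), (10.0, 10.0)], 1)
print(new_centers)  # centers stay near the two cohesive groups
```

Because the outlier is discarded before the centers are recomputed, it cannot drag a center away from the cohesive groups, which is the intuition behind robustness to a bounded number of noisy points.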

Location 
DC - William G. Davis Computer Research Centre
Room 2310
200 University Avenue West

Waterloo, ON N2L 3G1
Canada
