Master's Thesis Presentation: Naive Bayes Data Complexity and Characterization of Optima of the Unsupervised Expected Likelihood

Wednesday, July 26, 2017, 10:00 am EDT (GMT -04:00)

Speaker: Ali Wytsma, Master's Candidate

The naive Bayes model is simple and has been used for many decades, often as a baseline, in both supervised and unsupervised learning. With a latent class variable, it is one of the simplest latent variable models and is often used for clustering. Maximum likelihood estimation of its parameters (e.g., by gradient ascent or expectation maximization) is subject to local optima because the objective is non-concave. However, the conditions under which global optimality can be guaranteed are currently unknown.
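To make the non-concavity concrete, here is a minimal sketch of the objective in my own notation (not necessarily that of the thesis): for N training examples with D discrete features and K latent classes, the marginal log-likelihood is

\[
\ell(\theta) \;=\; \sum_{n=1}^{N} \log \sum_{c=1}^{K} P_\theta(C = c) \prod_{j=1}^{D} P_\theta(X_j = x_{nj} \mid C = c).
\]

Because of the logarithm of a sum over the latent class, this objective is non-concave in the parameters, so expectation maximization and gradient ascent are only guaranteed to reach a stationary point, not a global optimum.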

I provide a first characterization of the optima of the naive Bayes model. For problems with up to three features, I describe comprehensive conditions that ensure global optimality. For more than three features, I show that at every stationary point the marginal distributions over the features match those of the training data. In a second line of work, I consider the naive Bayes model with an observed class variable, which is often used for classification. Well-known results provide some upper bounds on the sample complexity for agnostic PAC learning; however, exact bounds are unknown. Such a bound would show exactly how much data is needed to train the model. I detail the framework for determining an exact, tight bound on the sample complexity, and I prove some of the sub-theorems on which this framework rests. I also provide some insight into the nature of the distributions that are hardest to model within specified accuracy parameters.
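As an illustration of the marginal-matching property above (again in my notation, as a sketch rather than the thesis's exact formulation): every stationary point \(\theta^*\) of the likelihood satisfies, for each feature j and each value x,

\[
\sum_{c=1}^{K} P_{\theta^*}(C = c)\, P_{\theta^*}(X_j = x \mid C = c) \;=\; \frac{1}{N} \sum_{n=1}^{N} \mathbf{1}[x_{nj} = x],
\]

i.e., the model's marginal over each individual feature reproduces the corresponding empirical marginal of the training data.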