Speaker: Csaba Szepesvári, University of Alberta
It is known that the nonparametric minimax rates for regression in the active and passive settings are the same for various smoothness classes. In this talk, I will look into the simpler problem in which the regression domain is finite, but the response variance is location dependent, i.e., the noise is heteroscedastic. This problem, albeit simple, arises in a number of important applications, such as quality assurance (creating sampling plans), or when trying to estimate an unknown mean using stratified sampling. The error of the estimates of a passive learner that samples the points of the domain according to a fixed distribution (say uniformly) converges at the rate C/n^{1/2}, where n is the number of samples. The error of an omnipotent active learner who knows the variances (but does not know the means) converges at the rate C^*/n^{1/2}, where C^* could be much smaller than C. Thus, the rates are the same, and the only difference is in the constants. Is it worthwhile (and possible) to develop an active learning algorithm to match the optimal constant? How can we achieve this? How big is the loss of such an algorithm compared to the performance of the omnipotent active learner? In the talk I will suggest that matching the optimal constant is worth the effort (sometimes). Further, I will suggest an algorithm whose performance will be shown to be within C'/n^{3/2} of the performance of the omnipotent learner. Experimental results on several problems, including option pricing, will demonstrate that matching the optimal performance can indeed be worth the trouble.
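The following minimal sketch illustrates the gap the abstract describes, using the stratified-sampling example; it is not the algorithm from the talk. It compares a passive learner that allocates samples uniformly over a finite domain, an "omnipotent" allocation that knows the per-point standard deviations (Neyman allocation, which is optimal for this weighted-mean objective), and a simple pilot-then-allocate heuristic standing in for an adaptive learner. All problem parameters (K, the means, the sigmas, the pilot size) are assumptions made up for the example.

```python
# Illustrative sketch only: stratified estimation of a weighted mean under
# heteroscedastic noise. Compares uniform allocation, variance-aware
# (Neyman) allocation with known sigmas, and a naive adaptive scheme that
# estimates the sigmas from a short uniform pilot phase.
import numpy as np

rng = np.random.default_rng(0)

K = 5                                          # size of the finite domain
means = rng.normal(0.0, 1.0, size=K)           # unknown per-point means
sigmas = np.array([0.1, 0.1, 0.1, 1.0, 5.0])   # heteroscedastic noise levels
weights = np.full(K, 1.0 / K)                  # target: sum_k weights[k] * means[k]
true_mean = weights @ means


def estimate(counts):
    """Draw counts[k] noisy responses at point k; return the plug-in estimate."""
    point_means = np.array([
        rng.normal(means[k], sigmas[k], size=counts[k]).mean() for k in range(K)
    ])
    return weights @ point_means


def allocate(proportions, n):
    """Convert allocation proportions into integer counts (at least 1 each)."""
    return np.maximum(1, np.floor(proportions / proportions.sum() * n).astype(int))


n = 1000
uniform_counts = allocate(np.ones(K), n)             # passive learner
neyman_counts = allocate(weights * sigmas, n)         # knows the variances

# Naive adaptive scheme: uniform pilot phase to estimate the sigmas,
# then Neyman allocation with the estimates for the remaining budget.
pilot = 20
sigma_hat = np.array([
    rng.normal(means[k], sigmas[k], size=pilot).std(ddof=1) for k in range(K)
])
adaptive_counts = allocate(weights * sigma_hat, n - K * pilot) + pilot

for name, counts in [("uniform", uniform_counts),
                     ("neyman (known sigmas)", neyman_counts),
                     ("adaptive (pilot)", adaptive_counts)]:
    errs = [abs(estimate(counts) - true_mean) for _ in range(200)]
    print(f"{name:>22s}: mean |error| = {np.mean(errs):.4f}")
```

All three estimators converge at the 1/n^{1/2} rate; what differs is the constant, which is exactly the regime the talk addresses. The algorithm presented in the talk is an adaptive allocation scheme with guarantees relative to the omnipotent learner, not the simple pilot heuristic sketched here.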
Friday, May 1, 2009, 11:30 am EDT (GMT -04:00)