Machine Learning with Nearest Neighbors

December 27, 2013, 2 PM

302-308

Abstract:

In this talk, I will explain contemporary machine learning methods that exploit theoretical properties of nearest neighbors. The theoretical study of nearest neighbor information goes back to the work of T. Cover and P. Hart in the 1960s, which analyzed the asymptotic behavior of nearest neighbor classification as the number of samples grows. Their best-known contribution is the asymptotic upper bound on the classification error, which is at most twice the Bayes error, along with the idea of connecting nearest neighbor information to the underlying probability density functions. I will give several examples of how nearest neighbor information can be better exploited in this theoretical context, together with an overview of related topics in supervised machine learning theory. The talk is mostly based on my recent work, but I will also introduce papers by leading researchers in this field that have appeared at recent NIPS and AISTATS conferences and in several statistics journals.
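
As a quick illustration of the Cover-Hart result, here is a minimal sketch of 1-NN classification on a toy two-Gaussian problem. The setup (the distributions, priors, and sample sizes) is an assumed example for illustration, not an experiment from the talk; asymptotically the 1-NN error R satisfies R* <= R <= 2 R* (1 - R*) <= 2 R*, where R* is the Bayes error.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample(n):
        # Assumed toy problem: class 0 ~ N(0,1), class 1 ~ N(2,1), equal priors.
        y = rng.integers(0, 2, size=n)
        x = rng.normal(loc=2.0 * y, scale=1.0)
        return x, y

    def one_nn_predict(x_train, y_train, x_test):
        # Each test point copies the label of its single nearest training point.
        idx = np.abs(x_test[:, None] - x_train[None, :]).argmin(axis=1)
        return y_train[idx]

    x_tr, y_tr = sample(2000)
    x_te, y_te = sample(1000)
    err = np.mean(one_nn_predict(x_tr, y_tr, x_te) != y_te)

    # Bayes rule for this toy problem thresholds at x = 1, so
    # R* = 1 - Phi(1) ~ 0.159 and the Cover-Hart bound 2 R* (1 - R*) ~ 0.267.
    print(f"1-NN test error: {err:.3f}")

With enough data the printed error should land between roughly 0.159 and 0.267, consistent with the asymptotic bound.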
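
The connection between nearest neighbor information and the underlying density can be sketched in the same spirit. The standard k-NN density estimator (my notation here, not necessarily the talk's) sets f_hat(x) = k / (n * Vol(ball of radius r_k(x))), where r_k(x) is the distance from x to its k-th nearest sample:

    def knn_density_1d(x_train, x_query, k=10):
        # Distance from each query point to its k-th nearest training sample.
        d = np.sort(np.abs(x_query[:, None] - x_train[None, :]), axis=1)
        r_k = d[:, k - 1]
        # In 1-D a ball of radius r has volume 2r, so
        # f_hat(x) = k / (n * 2 * r_k(x)).
        return k / (len(x_train) * 2.0 * r_k)

For the mixture above, knn_density_1d(x_tr, np.array([1.0]))[0] should come out near the true mixture density at x = 1, roughly 0.24, for moderate k.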