Let $X$ be a random element in a metric space $(\mathcal{F},d)$, and let $Y$ be a random variable taking values in $\{0,1\}$; $Y$ is called the class, or the label, of $X$. Let $(X_i,Y_i)_{1 \le i \le n}$ be an observed i.i.d. sample with the same law as $(X,Y)$. The problem of classification is to
predict the label of a new random element $X$. The $k$-nearest neighbor classifier is the following simple rule: look at the $k$ nearest neighbors of $X$ in the training sample and choose $0$ or $1$ for its label by majority vote.
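In formulas, a sketch of the rule reads as follows, with $Y_{(i)}(x)$ denoting the label of the $i$-th nearest neighbor of $x$ among $X_1,\dots,X_n$ (notation introduced here for illustration only, with ties broken arbitrarily and the vote resolved in favor of $0$):
\[
g_n(x) \;=\;
\begin{cases}
1 & \text{if } \displaystyle\sum_{i=1}^{k} \mathbf{1}_{\{Y_{(i)}(x)=1\}} > \frac{k}{2}, \\
0 & \text{otherwise.}
\end{cases}
\]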
When $(\mathcal{F},d)=(\mathbb{R}^d,\|\cdot\|)$, Stone (1977) proved the universal consistency of this classifier:
its probability of error converges to the Bayes error, whatever the
distribution of $(X,Y)$. We show in this paper that this result is no
longer valid in general metric spaces. However, if $(\mathcal{F},d)$ is
separable and if some regularity condition is assumed, then the
$k$-nearest neighbor classifier is weakly consistent.
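In the standard notation of the field (not spelled out above), weak consistency means that the error probability of the classifier $g_n$ built on the $n$-sample converges to the Bayes error:
\[
\mathbb{P}\bigl(g_n(X) \neq Y\bigr) \;\xrightarrow[n\to\infty]{}\; L^* \;=\; \inf_{g:\,\mathcal{F}\to\{0,1\}} \mathbb{P}\bigl(g(X) \neq Y\bigr),
\]
typically with $k=k_n\to\infty$ and $k_n/n\to 0$, assumptions on $k$ that this abstract leaves implicit.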