Application to Validation and Testing of a Classifier

The most important application of the above, from the point of view of this course, is estimating the error rate of a classifier. This is sometimes called validating or testing the classifier, or measuring its generalization performance.

Suppose we have a classifier, developed using learning techniques or any other method. We can view its error rate as the probability that it will misclassify a pattern. To estimate this, the classifier is applied to a set of test patterns, and the fraction of those patterns misclassified is used as the estimate of the error rate,

\begin{displaymath}\hat{p} = \frac{k}{n}, \end{displaymath}

where $k$ is the number of patterns misclassified and $n$ is the number of patterns in the test sample. If data were used to develop (train) the classifier, it is essential that a different set of patterns be used to test it.
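
To make the estimate concrete, here is a minimal sketch in Python; the function name and the toy test data are illustrative, not part of the original notes:

    def error_rate(predictions, true_labels):
        """Estimate the error rate p-hat = k / n on a held-out test set."""
        n = len(true_labels)
        # k is the number of test patterns the classifier gets wrong.
        k = sum(p != y for p, y in zip(predictions, true_labels))
        return k / n

    # Hypothetical test run: 2 mistakes out of 8 patterns gives p-hat = 0.25.
    predictions = ['a', 'b', 'b', 'a', 'a', 'b', 'a', 'b']
    true_labels = ['a', 'b', 'a', 'a', 'a', 'b', 'b', 'b']
    print(error_rate(predictions, true_labels))  # 0.25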

We can see from the figure in the previous section that if $n$ is not large, the true error rate can differ considerably from the measured one. For example, if the test set contains 50 patterns and the classifier correctly classifies all of them, at the 95% confidence level the true error could still lie anywhere between zero and eight percent. Such reasoning can help you determine how much test data you will need. For example, suppose you are required to produce a classifier which is correct 97% of the time. You will require a sample size of about 250, so that if you can get a classifier to misclassify less than one percent of the sample, the true error is likely to be less than three percent. (How to make such a classifier is another problem.)
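
You can check such figures yourself by computing a 95% confidence interval for the binomial proportion directly. The sketch below uses the Wilson score interval, one standard construction; the figure in these notes may be based on a different interval, so the endpoints are indicative rather than an exact reproduction of the numbers above:

    import math

    def wilson_interval(k, n, z=1.96):
        """95% Wilson score confidence interval for p-hat = k / n."""
        p_hat = k / n
        denom = 1 + z**2 / n
        centre = (p_hat + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n
                                       + z**2 / (4 * n**2))
        # Clamp to [0, 1], since a probability cannot leave that range.
        return max(0.0, centre - half), min(1.0, centre + half)

    # 50 test patterns, none misclassified: upper limit is still several percent.
    print(wilson_interval(0, 50))    # approx (0.000, 0.071)

    # 250 test patterns, 2 misclassified (0.8%): upper limit is below 3%.
    print(wilson_interval(2, 250))   # approx (0.002, 0.029)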

Jon Shapiro
1999-09-23