
On the link between error correlation and error reduction in decision tree ensembles

Abstract

Recent work has shown that learning an ensemble of multiple models and combining their individual classifications often yields more accurate classifications than those based on a single model learned from the same data. However, the amount of error reduction achieved varies from data set to data set. This paper provides empirical evidence that there is a linear relationship between the degree of error reduction and the degree to which the patterns of errors made by individual models are uncorrelated: ensemble error rate is reduced most in ensembles whose constituents make their individual errors in a less correlated manner. The second result of the work is that some of the greatest error reductions occur on domains for which many ties in information gain occur during learning. The third result is that ensembles consisting of models that make errors in a dependent but "negatively correlated" manner will have lower ensemble error rates than ensembles whose constituents make errors in an uncorrelated manner. Previous work has aimed at learning models that make errors in an uncorrelated manner rather than in a "negatively correlated" manner. Taken together, these results help provide an understanding of why the multiple-models approach yields large error reductions in some domains but little in others.
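To make the abstract's central claim concrete, the following is a minimal illustrative simulation, not the paper's experimental setup: the ensemble size, error rate, and the three correlation constructions (positively correlated, uncorrelated, negatively correlated) are hypothetical choices used only to show how majority-vote ensemble error changes as the models' individual errors become less correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_examples, p_err = 5, 100_000, 0.3  # hypothetical settings

def majority_vote_error(errors):
    """Fraction of examples on which more than half of the models are wrong."""
    return float(np.mean(errors.sum(axis=1) > n_models // 2))

# Uncorrelated: each model errs independently with probability p_err.
uncorrelated = rng.random((n_examples, n_models)) < p_err

# Positively correlated: a shared "hard example" flag makes all models likely
# to err on the same inputs; p_easy is chosen so each model's marginal error
# rate stays at p_err (p_err * 0.8 + (1 - p_err) * p_easy = p_err).
hard = rng.random((n_examples, 1)) < p_err
p_easy = (p_err - 0.8 * p_err) / (1 - p_err)
positively_correlated = np.where(hard,
                                 rng.random((n_examples, n_models)) < 0.8,
                                 rng.random((n_examples, n_models)) < p_easy)

# Negatively correlated: spread errors so they rarely coincide; each example
# gets exactly 1 or 2 wrong models (mean 1.5 = p_err * n_models), so a
# majority of the five models is never wrong on the same example.
wrong_per_example = rng.integers(1, 3, size=n_examples)
perm = rng.random((n_examples, n_models)).argsort(axis=1)
negatively_correlated = perm < wrong_per_example[:, None]

for name, errs in [("positively correlated", positively_correlated),
                   ("uncorrelated", uncorrelated),
                   ("negatively correlated", negatively_correlated)]:
    print(f"{name:>22}: single-model error {errs.mean():.3f}, "
          f"majority-vote error {majority_vote_error(errs):.3f}")
```

Under this construction, all three scenarios have (approximately) the same single-model error rate, yet the majority-vote error is lower when errors are uncorrelated than when they are positively correlated, and it falls to zero in the negatively correlated case because no example is ever misclassified by three or more of the five models.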
