- Title
An analysis of diversity measures.
- Authors
E. Tang; P. Suganthan; X. Yao
- Abstract
Diversity among the base classifiers is deemed to be important when constructing a classifier ensemble. Numerous algorithms have been proposed to construct a good classifier ensemble by seeking both the accuracy of the base classifiers and the diversity among them. However, there is no generally accepted definition of diversity, and measuring diversity explicitly is very difficult. Although researchers have designed several experimental studies to compare different diversity measures, confusing results were usually observed. In this paper, we present a theoretical analysis of six existing diversity measures (namely the disagreement measure, double fault measure, KW variance, inter-rater agreement, generalized diversity, and measure of difficulty), show underlying relationships between them, and relate them to the concept of margin, which is more explicitly related to the success of ensemble learning algorithms. We illustrate why confusing experimental results were observed and show that the discussed diversity measures are naturally ineffective. Our analysis provides a deeper understanding of the concept of diversity, and hence can help design better ensemble learning algorithms.
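The abstract lists six diversity measures without defining them. As a minimal illustration, the simplest of the six, the pairwise disagreement measure, is the fraction of samples on which two base classifiers predict differently; ensemble-level diversity is commonly taken as the average over all classifier pairs. The sketch below assumes 0/1 prediction vectors and is an illustrative implementation, not the paper's code.

```python
import numpy as np
from itertools import combinations

def disagreement(preds_a, preds_b):
    """Pairwise disagreement: fraction of samples where the two
    classifiers' predictions differ."""
    preds_a = np.asarray(preds_a)
    preds_b = np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))

def mean_pairwise_disagreement(all_preds):
    """Average disagreement over all pairs of base classifiers,
    a common ensemble-level diversity score."""
    pairs = list(combinations(range(len(all_preds)), 2))
    return sum(disagreement(all_preds[i], all_preds[j])
               for i, j in pairs) / len(pairs)

# Example: two classifiers differing on 2 of 4 samples.
d = disagreement([0, 1, 1, 0], [0, 1, 0, 1])  # 0.5
```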
- Publication
Machine Learning, 2006, Vol. 65, Issue 1, p. 247
- ISSN
0885-6125
- Publication type
Article
- DOI
10.1007/s10994-006-9449-2