- Title
Multiple classifier integration for the prediction of protein structural classes.
- Authors
Lei Chen; Lin Lu; Kairui Feng; Wenjin Li; Jie Song; Lulu Zheng; Youlang Yuan; Zhenbin Zeng; Kaiyan Feng; Wencong Lu; Yudong Cai
- Abstract
Supervised classifiers, such as artificial neural networks, partition trees, and support vector machines, are often used for the prediction and analysis of biological data. However, choosing an appropriate classifier is not straightforward because each classifier has its own strengths and weaknesses, and each biological dataset has its own characteristics. By integrating many classifiers, one can avoid the dilemma of choosing a single classifier and achieve optimized classification results (Rahman et al., Multiple Classifier Combination for Character Recognition: Revisiting the Majority Voting System and Its Variation, Springer, Berlin, 2002, 167–178). The classification algorithms come from Weka (Witten and Frank, Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann, San Francisco, 2005), a collection of software tools for machine learning algorithms. By integrating many predictors (classifiers) through simple voting, the correct prediction (classification) rates are 65.21% and 65.63% for a basic training dataset and an independent test dataset, respectively. These results are better than those of any single machine learning algorithm in Weka when exactly the same data are used. Furthermore, we introduce an integration strategy that accounts for both classifier weightings and classifier redundancy. A feature selection strategy, minimum redundancy maximum relevance (mRMR), is adapted here to algorithm selection to deal with classifier redundancy, and the weightings are based on the performance of each classifier. The best classification results are obtained when 11 algorithms are selected by the mRMR method and integrated through weighted majority votes. As a result, the correct prediction rates are 68.56% and 69.29% for the basic training dataset and the independent test dataset, respectively.
The web server is available at http://chemdata.shu.edu.cn/protein_st/. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2009
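The abstract's final integration step combines classifier outputs by majority vote with per-classifier weightings. The following is a minimal, hypothetical sketch of such weighted majority voting, not the authors' implementation; the function name, example labels, and weights (standing in for per-classifier performance scores) are all illustrative assumptions.

```python
# Weighted majority voting over the outputs of several classifiers.
# Each classifier's vote counts proportionally to its weight, which in
# the paper's scheme would be based on that classifier's performance.
from collections import defaultdict

def weighted_majority_vote(predictions, weights):
    """Return the label with the highest total weight.

    predictions: predicted class labels, one per classifier
    weights:     per-classifier weights (e.g. validation accuracy)
    """
    scores = defaultdict(float)
    for label, weight in zip(predictions, weights):
        scores[label] += weight
    return max(scores, key=scores.get)

# Three hypothetical classifiers vote on a protein's structural class:
labels = ["alpha", "beta", "alpha"]
weights = [0.66, 0.70, 0.61]
print(weighted_majority_vote(labels, weights))  # prints "alpha"
```

With unit weights this reduces to the simple voting the abstract describes first; unequal weights let a strong classifier outvote two weak ones that agree.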
- Subjects
ARTIFICIAL neural networks; SUPPORT vector machines; ALGORITHMS; MACHINE learning; PROTEINS; AMINO acids
- Publication
Journal of Computational Chemistry, 2009, Vol 30, Issue 14, p2248
- ISSN
0192-8651
- Publication type
Article
- DOI
10.1002/jcc.21230