A Meta-Ensemble Classifier Approach: Random Rotation Forest

Ensemble learning is a popular and intensively studied area of machine learning and pattern recognition aimed at improving classification performance. Random Forest is widely used because it delivers fast and effective results, while Rotation Forest can achieve higher accuracy than Random Forest. In this study, we present a meta-ensemble classifier, called Random Rotation Forest, that combines the advantages of these two classifiers (i.e., Rotation Forest and Random Forest). In the experimental studies, we use three base learners (namely, J48, REPTree, and Random Forest) and two meta-learners (namely, Bagging and Rotation Forest) for ensemble classification on five datasets from the UCI Machine Learning Repository. The experimental results indicate that Random Rotation Forest gives promising results compared with the base learners and the Bagging ensemble approach in terms of accuracy, AUC, precision, recall, and F-measure. Our method can be used for image/pattern recognition and other machine learning problems.
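The core idea above can be sketched in a few lines: a Rotation Forest-style meta-ensemble whose base learner is a Random Forest. Each ensemble member rotates the feature space via PCA applied to random feature groups (as in Rotation Forest), then trains a Random Forest on the rotated data. This is a minimal illustrative sketch, not the authors' implementation; the class name, parameters, and defaults are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

class RandomRotationForest:
    """Sketch: Rotation Forest meta-ensemble with Random Forest base learners."""

    def __init__(self, n_members=10, n_feature_groups=3, random_state=0):
        self.n_members = n_members
        self.n_feature_groups = n_feature_groups
        self.rng = np.random.default_rng(random_state)
        self.members = []  # list of (rotation matrix, fitted forest) pairs

    def _rotation(self, X):
        """Block-diagonal rotation built from PCA on random feature groups."""
        n_features = X.shape[1]
        order = self.rng.permutation(n_features)
        groups = np.array_split(order, self.n_feature_groups)
        R = np.zeros((n_features, n_features))
        for g in groups:
            pca = PCA().fit(X[:, g])
            R[np.ix_(g, g)] = pca.components_.T  # rotate within the group
        return R

    def fit(self, X, y):
        self.members = []
        for i in range(self.n_members):
            R = self._rotation(X)
            forest = RandomForestClassifier(
                n_estimators=50, random_state=i).fit(X @ R, y)
            self.members.append((R, forest))
        return self

    def predict(self, X):
        # Soft voting: average class probabilities across ensemble members.
        probas = sum(forest.predict_proba(X @ R) for R, forest in self.members)
        return self.members[0][1].classes_[np.argmax(probas, axis=1)]
```

In a real Rotation Forest, PCA is typically fit on a bootstrap sample of a subset of classes per group; the simplified version here fits PCA on the full data for brevity.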

___

  • T.G. Dietterich, Ensemble methods in machine learning, In International workshop on multiple classifier systems, Springer, Berlin, Heidelberg, 2000, pp. 1-15.
  • W. Feng, W. Bao, Weight-Based Rotation Forest for Hyperspectral Image Classification, IEEE Geoscience and Remote Sensing Letters, 14(11), 2017, pp. 2167-2171.
  • E. Aličković, A. Subasi, Breast cancer diagnosis using GA feature selection and Rotation Forest, Neural Computing and Applications, 28(4), 2017, pp. 753-763.
  • M. Pal, Random forest classifier for remote sensing classification, International Journal of Remote Sensing, 26(1), 2005, pp. 217-222.
  • A. Onan, Sentiment Analysis on Twitter Based on Ensemble of Psychological and Linguistic Feature Sets, Balkan Journal of Electrical and Computer Engineering, 6(2), 2018, pp. 1-9.
  • W. Y. Loh, Classification and regression trees, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1), 2011, pp. 14-23.
  • W.N.H.W. Mohamed, M.N.M. Salleh, A.H. Omar, A comparative study of reduced error pruning method in decision tree algorithms, In Control System, Computing and Engineering (ICCSCE), 2012 IEEE International Conference on, 2012, pp. 392-397.
  • L. Breiman, Random forests, Machine learning, 45(1), 2001, pp. 5-32.
  • J.J. Rodriguez, L.I. Kuncheva, C.J. Alonso, Rotation forest: A new classifier ensemble method, IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(10), 2006, pp. 1619-1630.
  • Y. Freund, R.E. Schapire, Experiments with a new boosting algorithm, In ICML, 96, 1996, pp. 148-156.
  • K.H. Liu, D.S. Huang, Cancer classification using rotation forest, Computers in biology and medicine, 38(5), 2008, pp. 601-610.
  • C.X. Zhang, J.S. Zhang, RotBoost: A technique for combining Rotation Forest and AdaBoost, Pattern Recognition Letters, 29(10), 2008, pp. 1524-1536.
  • A. Ozcift, A. Gulten, Classifier ensemble construction with rotation forest to improve medical diagnosis performance of machine learning algorithms, Computer methods and programs in biomedicine, 104(3), 2011, pp. 443-451.
  • P. Du, A. Samat, B. Waske, S. Liu, Z. Li, Random forest and rotation forest for fully polarized SAR image classification using polarimetric and spatial features, ISPRS Journal of Photogrammetry and Remote Sensing, 105, 2015, pp. 38-53.
  • F. Lv, M. Han, Hyperspectral image classification based on improved rotation forest algorithm, Sensors, 18(11), 2018, 3601.
  • A. Bagnall, A. Bostrom, G. Cawley, M. Flynn, J. Large, J. Lines, Is rotation forest the best classifier for problems with continuous features?, 2018, arXiv preprint arXiv:1809.06705.
  • B.K. Singh, K. Verma, A.S. Thoke, Investigations on impact of feature normalization techniques on classifier's performance in breast tumor classification, International Journal of Computer Applications, 116(19), 2015, pp. 11-15.
  • C.L. Devasena, Comparative analysis of random forest, REP tree and J48 classifiers for credit risk prediction, In International Journal of Computer Applications (0975-8887), International Conference on Communication, Computing and Information Technology (ICCCMIT-2014), 2014, pp. 30-36.
  • P. Hamsagayathri, P. Sampath, Decision tree classifiers for classification of breast cancer, Int. J. Curr. Pharm. Res, 9(2), 2017, 31.
  • G. Biau, Analysis of a random forests model, Journal of Machine Learning Research, 13(Apr), 2012, pp. 1063-1095.
  • O. Sagi, L. Rokach, Ensemble learning: A survey, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4), e1249, 2018, pp. 1-18.
  • C. Zhang, Y. Ma, Ensemble machine learning: methods and applications, Springer Science & Business Media, 2012.
  • G. Louppe, Understanding random forests: From theory to practice, arXiv preprint arXiv:1407.7502, 2014.
  • UCI Machine Learning Repository, 2018, https://archive.ics.uci.edu/ml/datasets.html
  • T. Fawcett, An introduction to ROC analysis, Pattern recognition letters, 27(8), 2006, pp. 861-874.
  • T. Fawcett, ROC graphs: Notes and practical considerations for researchers, Machine learning, 31(1), 2004, pp. 1-38.
  • E. Taşcı, O. Gökalp, A. Uğur, Development of a novel feature weighting method using CMA-ES optimization, In 2018 26th Signal Processing and Communications Applications Conference (SIU), IEEE, 2018, pp. 1-4.