Resampling and Ensemble Strategies for Churn Prediction

Churn analysis is a customer relationship management (CRM) analytics technique that companies use to predict which customers are likely to stop doing business with them. Marketing efforts to retain existing customers can succeed only if probable churners are correctly identified beforehand. It is therefore crucial to have powerful models with high predictive capability that lead to profit growth. The imbalanced nature of churn datasets negatively affects the classification performance of machine learning methods. This study examines resampling (over- and under-sampling) and ensemble learning (bagging, boosting, and stacking) strategies integrated with the cross-validation procedure for imbalanced churn prediction. The experimental results, compared against Support Vector Machines taken as the benchmark, show that the ensemble methods improve prediction performance. Moreover, over-sampling yields a noticeable performance gain over the under-sampling approach.
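
The sketch below illustrates the kind of pipeline the abstract describes; it is not the authors' implementation. It assumes scikit-learn and imbalanced-learn, uses a synthetic imbalanced dataset in place of a real churn table, and picks illustrative models: an SVM benchmark against bagging, gradient boosting, and stacking, each combined with random over- or under-sampling applied inside the cross-validation folds.

```python
# Illustrative sketch only (assumes scikit-learn and imbalanced-learn);
# the dataset, samplers, and classifiers are stand-ins, not the paper's exact setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline  # applies resampling inside training folds only

# Synthetic stand-in for an imbalanced churn table (~10% churners).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1],
                           random_state=42)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

models = {
    "SVM (benchmark)": SVC(),                         # reference classifier
    "Bagging": BaggingClassifier(n_estimators=100),   # bootstrap-aggregated trees
    "Boosting": GradientBoostingClassifier(),         # gradient boosting
    "Stacking": StackingClassifier(                   # meta-learner over base models
        estimators=[("bag", BaggingClassifier(n_estimators=50)),
                    ("gbm", GradientBoostingClassifier())],
        final_estimator=LogisticRegression(max_iter=1000)),
}
samplers = {
    "over-sampling": RandomOverSampler(random_state=42),
    "under-sampling": RandomUnderSampler(random_state=42),
}

for s_name, sampler in samplers.items():
    for m_name, model in models.items():
        # Wrapping the sampler in the pipeline ensures it is refit on each training
        # fold and never touches the validation fold, avoiding optimistic leakage.
        pipe = Pipeline([("resample", sampler), ("clf", model)])
        scores = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
        print(f"{s_name:15s} {m_name:16s} AUC = {scores.mean():.3f} ± {scores.std():.3f}")
```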
