Determining the Tested Classes with Software Metrics

Early detection and correction of errors in software projects reduces the risk of exceeding the estimated time and cost. An efficient and effective test plan should be implemented to detect potential errors as early as possible. In the earlier phases, code can be analyzed efficiently with software metrics, providing insight into error susceptibility so that measures can be taken if necessary. Software metrics can be classified according to the time at which the data are collected, the information used in the measurement, and the type and interval of the data generated. With respect to the type and interval of the data generated, object-oriented software metrics are widely used in the literature. There are three main metric sets used for software projects developed in an object-oriented fashion: the Chidamber & Kemerer, MOOD, and QMOOD metric sets. In this study, an approach for identifying the classes that should be tested first was developed using object-oriented software metrics, and this approach was then applied to selected versions of the developed project. According to the results obtained, the correct determination rate of the sum of the metrics method, which was developed to identify the classes that should be tested first, ranged between 55% and 68%. In the random selection method, which was used for comparison, the correct determination rate ranged between 9.23% and 11.05%. The results obtained with the sum of the metrics method therefore show a significant improvement over the random selection method.
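As a minimal sketch, the sum of the metrics idea described above can be illustrated by summing per-class metric values and ranking classes in descending order of the total, so the highest-scoring classes are tested first. The class names, metric values, and the function `rank_classes_by_metric_sum` below are illustrative assumptions, not details taken from the study; real metric values would come from a metrics extraction tool.

```python
def rank_classes_by_metric_sum(metrics):
    """Rank classes by the sum of their metric values, highest first.

    metrics: dict mapping class name -> dict of metric name -> value
    (e.g. the Chidamber & Kemerer suite: WMC, DIT, NOC, CBO, RFC, LCOM).
    Returns a list of class names; classes at the front of the list
    are candidates to be tested first.
    """
    totals = {cls: sum(values.values()) for cls, values in metrics.items()}
    return sorted(totals, key=totals.get, reverse=True)


# Hypothetical metric values for three classes (not from the study):
metrics = {
    "OrderService": {"WMC": 24, "DIT": 2, "NOC": 0, "CBO": 9, "RFC": 31, "LCOM": 12},
    "Logger":       {"WMC": 5,  "DIT": 1, "NOC": 0, "CBO": 2, "RFC": 7,  "LCOM": 1},
    "Parser":       {"WMC": 18, "DIT": 3, "NOC": 1, "CBO": 6, "RFC": 22, "LCOM": 8},
}

print(rank_classes_by_metric_sum(metrics))
# → ['OrderService', 'Parser', 'Logger']
```

In practice the individual metrics have different scales, so a variant of this scheme might normalize each metric before summing; the plain sum shown here follows the method's name as given in the abstract.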

___

  • 1. Tiftik, N, Öztarak, H, Ercek, G, Özgün, S, Sistem/Yazılım Geliştirme Sürecinde Doğrulama Faaliyetleri, 3. Ulusal Yazılım Mühendisliği Sempozyumu (UYMS'07), Ankara, 2007.
  • 2. Song, Q, Shepperd, M, Cartwright, M, and Mair, C, Software Defect Association Mining and Defect Correction Effort Prediction, IEEE Transactions on Software Engineering, 2006, 32(2), 69-82.
  • 3. Fenton, N, Ohlsson, N, Quantitative Analysis of Faults and Failures in a Complex Software System, IEEE Transactions on Software Engineering, 2000, 26(8), 797-814.
  • 4. Xiaowei, W, The Metric System about Software Maintenance, 2011 International Conference of Information Technology, Computer Engineering and Management Sciences, Wuhan, 2011.
  • 5. Kaur, A, Sandhu, P.S, Brar, A.S, An Empirical Approach for Software Fault Prediction, 5th International Conference on Industrial and Information Systems, Mangalore, India, 2010, pp 261–265.
  • 6. Buse, R.P.L, Weimer, W.R, Learning a Metric for Code Readability, IEEE Transactions on Software Engineering, 2010, 36(4), 546-558.
  • 7. Ogasawara, H, Yamada, A, Kojo, M, Experiences of Software Quality Management Using Metrics through the Life-Cycle, 18th International Conference on Software Engineering, Berlin, 1996, pp 179–188.
  • 8. Chaumun, M, Kabaili, H, Keller, R, Lustman, F, Change Impact Model for Changeability Assessment in Object-Oriented Software Systems, Science of Computer Programming, Elsevier, 2002, 45(2-3), 155-174.
  • 9. Lee, Y, Yang, J, Chang, K.H, Metrics and Evolution in Open Source Software, Seventh International Conference on Quality Software (QSIC 2007), IEEE: Portland, OR, 2007, pp 191-197.
  • 10. Kastro, Y, Bener, A.B, A defect prediction method for software versioning, Software Quality Journal, Springer, 2008, 16(4), 543-562.
  • 11. Li, L, Leung, H, Mining Static Code Metrics for a Robust Prediction of Software Defect Proneness, International Symposium on Empirical Software Engineering and Measurement, IEEE: Banff, AB, 2011, pp 207-214.
  • 12. NASA Datasets. (accessed 22.08.2014) http://promise.site.uottawa.ca/SERepository/datasets-page.html
  • 13. Efil, İ, Toplam Kalite Yönetimi ve Toplam Kaliteye Ulaşmada Önemli Bir Araç: ISO 9000 Kalite Güvence Sistemi, Bursa: Uludağ Üniversitesi Basımevi, s.29, 1995.
  • 14. Galin, D, Software Quality Assurance: From Theory to Implementation, Addison Wesley, 2004; pp 510-514.
  • 15. Loon, H.V, Process Assessment and ISO/IEC 15504: A Reference Book, Springer, 2nd Edition, 2007.
  • 16. Hofmann, H, Yedlin, D.K, Mishler, J, Kushner, S, CMMI for Outsourcing: Guidelines for Software, Systems, and IT Acquisition, Addison-Wesley Professional, 1st Edition, 2007, pp. 2-4.
  • 17. Yücalar, F, Yazılım Ölçümüne Giriş, Maltepe Üniversitesi, Yazılım Mühendisliği Bölümü, Yazılım Ölçütleri Ders Notları, 2013.
  • 18. Arvanitou, E.M, Ampatzoglou, A, Chatzigeorgiou, A, Avgeriou, P, Software metrics fluctuation: a property for assisting the metric selection process, Information and Software Technology, 2016, 72, 110-124.
  • 19. Amara, D, Ben Arfa Rabai, L, Towards a New Framework of Software Reliability Measurement Based on Software Metrics, 8th International Conference on Ambient Systems, Networks and Technologies (ANT 2017), Procedia Computer Science, 2017, 109, pp 725-730.
  • 20. Arar, Ö.F, Ayan, K, Deriving thresholds of software metrics to predict faults on open source software: Replicated case studies, Expert Systems with Applications, 2016, 61, 106-121.
  • 21. Chidamber, S, Kemerer, C, A Metrics Suite for Object-Oriented Design, IEEE Transactions on Software Engineering, 1994, 20(6), pp 476-493.
  • 22. Brito e Abreu, F, Pereira, G, Sousa, P, Coupling-Guided Cluster Analysis Approach to Reengineer the Modularity of Object-Oriented Systems, Conference on Software Maintenance and Reengineering, IEEE: Washington, DC, USA, 2000, pp 13-22.
  • 23. Bansiya, J, Davis, C, A Hierarchical Model for Object-Oriented Design Quality Assessment, IEEE Transactions on Software Engineering, 2002, 28(1), pp 4-17.
  • 24. Erdemir, U, Tekin, U, Buzluca, F, Nesneye Dayalı Yazılım Metrikleri ve Yazılım Kalitesi, Yazılım Kalitesi ve Yazılım Geliştirme Araçları Sempozyumu (YKGS’2008), İstanbul, 2008.