Investigation of School Effects on Student Achievement in Primary Education Using Value-Added Assessment

Purpose: The purpose of this study is to assess the contribution of primary schools within the metropolitan municipality of the province of Ankara, Turkey, to the achievement outcomes of 7th-grade students, using the results of the end-of-year Level Determination Exam. Research Methods: The study was carried out within a causal-comparative framework; its population consists of the primary schools in the province of Ankara. The sample includes the 24 primary schools, located within the territorial jurisdiction of the metropolitan municipality of the capital of Turkey, whose students attended the same school in both the 2007-2008 and 2008-2009 school years. The value-added effects of the sampled schools on student growth are assessed using a simple fixed-effect model. In addition, Kendall tau rank correlation coefficients are calculated to determine whether there is a statistically significant relationship between the rankings of the schools according to average student achievement levels and the rankings according to their value-added effects on student growth. Findings: The results indicate that there are significant inconsistencies between the rankings of the schools according to their value-added effects on student improvement and the rankings according to average student achievement, the latter being the method frequently used to assess school performance in Turkey. Moreover, the results demonstrate that the value-added effects of the schools on student improvement differ drastically from subject to subject. Implications for Research and Practice: This research is expected to lead to a more balanced evaluation of schools, particularly as more data accumulate over the years. In addition, as the first value-added assessment study carried out in Turkey, it points out that the way Turkish schools are currently assessed is problematic and suggests that value-added methods should be considered in evaluating school effects.
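
To make the analytic workflow concrete, the sketch below illustrates the general idea on synthetic data: each school's value-added effect is approximated as its students' mean year-to-year gain relative to the overall mean gain (a simple gain-score stand-in, since the paper's exact fixed-effect specification and the actual exam data are not reproduced in this abstract), and a Kendall tau rank correlation is then computed between the value-added ranking and the ranking by average achievement. All column names, sample sizes, and score scales here are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import kendalltau

rng = np.random.default_rng(0)

# Hypothetical data: 24 schools with 30 students each, scores for two consecutive years.
schools = np.repeat(np.arange(24), 30)
score_2008 = rng.normal(300, 50, size=schools.size)
latent_effect = rng.normal(0, 10, size=24)[schools]   # unobserved school contribution
score_2009 = score_2008 + 20 + latent_effect + rng.normal(0, 30, size=schools.size)
df = pd.DataFrame({"school": schools,
                   "score_2008": score_2008,
                   "score_2009": score_2009})

# Gain-score approximation of each school's value-added effect:
# the mean gain of its students, centred on the overall mean gain.
df["gain"] = df["score_2009"] - df["score_2008"]
value_added = df.groupby("school")["gain"].mean() - df["gain"].mean()

# Status-based ranking commonly used in practice: mean end-of-year achievement per school.
mean_achievement = df.groupby("school")["score_2009"].mean()

# Kendall tau rank correlation between the two school rankings.
tau, p_value = kendalltau(value_added, mean_achievement)
print(f"Kendall tau = {tau:.3f}, p = {p_value:.3f}")
```

A low or non-significant tau in such a comparison would indicate, as the study reports for the real data, that ranking schools by average achievement and ranking them by their estimated contribution to student growth can lead to substantially different conclusions.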
