Research Article
Year 2021, Volume: 4, Issue: 4 - ICETOL Special Issue, 835-853, 31.12.2021
https://doi.org/10.31681/jetol.987902


Online peer assessment in teacher education


Abstract

As education increasingly takes place in electronic environments, it has become necessary to monitor changes in learners' skills there as well. In this study, pre-service teachers gave presentations during their teaching practice, and a formative assessment task was designed to ensure the active participation of the observer pre-service teachers in an online peer assessment process. The observers were asked to evaluate their peers' performances using a rubric. Based on the quantitative data collected and analyzed, questions about the participants' experiences were formulated, and the participants' opinions were gathered via e-mail. The findings were obtained through a sequential explanatory mixed-methods design. The study revealed that the observer pre-service teachers could evaluate different performances consistently. It also showed that the validity of the assessments was considerably low, especially in the evaluation of low- and medium-level performances. The qualitative findings confirmed the quantitative findings.

References

  • Baartman, L. K., Bastiaens, T. J., Kirschner, P. A., & Van der Vleuten, C. P. (2007). Teachers’ opinions on quality criteria for Competency Assessment Programs. Teaching and Teacher Education, 23(6), 857-867. DOI: 10.1016/j.tate.2006.04.043.
  • Black, P. (1998). Testing: Friend or Foe? Theory and Practice of Assessment and Testing. Maidenhead, London: Falmer Press.
  • Black, P., Harrison, C., & Lee, C. (2003). Assessment for Learning: Putting It Into Practice. UK: Open University Press.
  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. DOI: 10.1191/1478088706qp063oa.
  • Brennan, R. L. (2001). Generalizability Theory. New York: Springer. DOI: 10.1007/978-1-4757-3456-0.
  • Brown, G., Glasswell, K., & Harland, D. (2004). Accuracy in scoring of writing: Studies of reliability and validity using a New Zealand writing assessment system. Assessing Writing, 9, 105–121. DOI: 10.1016/j.asw.2004.07.001.
  • Chang, C. C., Tseng, K. H., Chou, P. N. & Chen, Y. H. (2011). Reliability and validity of Web-based portfolio peer assessment: A case study for a senior high school’s students taking computer course. Computers and Education, 57, 1306–16. DOI: 10.1016/j.compedu.2011.01.014.
  • Cheng, K. H., Liang, J. C., & Tsai, C. C. (2015). Examining the role of feedback messages in undergraduate students’ writing performance during an online peer assessment activity. The Internet and Higher Education, 25, 78–84. DOI: 10.1016/j.iheduc.2015.02.001.
  • Cho, K., Schunn, C. D., & Wilson, R. W. (2006). Validity and reliability of scaffolded peer assessment of writing from instructor and student perspectives. Journal of Educational Psychology, 98(4), 891. DOI: 10.1037/0022-0663.98.4.891.
  • Creswell, J. W. (2014). A Concise Introduction to Mixed Methods Research. USA: Sage Publications Inc.
  • Creswell, J. W., & Clark, V. L. P. (2017). Designing and Conducting Mixed Methods Research. USA: Sage Publications Inc.
  • Crocker, L., & Algina, J. (1986). Introduction to Classical and Modern Test Theory. USA: Thomson Learning.
  • Dunbar, S. B., Koretz, D. M., & Hoover, H. D. (1991). Quality control in the development and use of performance assessment. Applied Measurement in Education, 4, 289-303. DOI: 10.1207/s15324818ame0404_3.
  • Falchikov, N., & Boud, D. (1989). Student self-assessment in Higher Education: A meta-analysis. Review of Educational Research, 59(4), 395–430. DOI: 10.3102/00346543059004395.
  • Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in Higher Education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70, 287–322. DOI: 10.3102/00346543070003287.
  • Gearhart, M., Herman, J. L., Novak, J. R., & Wolf, S. A. (1995). Toward the instructional utility of large-scale writing assessment: Validation of a new narrative rubric. Assessing Writing, 2, 207-242. DOI: 10.1016/1075-2935(95)90013-6.
  • Iglesias Pérez, M. C., Vidal-Puga, J., & Pino Juste, M. R. (2020). The role of self and peer assessment in Higher Education. Studies in Higher Education, 1-10. DOI: 10.1080/03075079.2020.1783526.
  • James, N., & Busher, H. (2006). Credibility, authenticity and voice: Dilemmas in online interviewing. Qualitative Research, 6(3), 403-420. DOI: 10.1177/1468794106065010.
  • Jones, I., & Alcock, L. (2014). Peer assessment without assessment criteria. Studies in Higher Education, 39(10), 1774-1787. DOI: 10.1080/03075079.2013.821974.
  • Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130-144. DOI: 10.1016/j.edurev.2007.05.002.
  • Lemlech, J. K. (1995). Becoming a Professional Leader. New York: Scholastic Inc.
  • Li, H., Xiong, Y., Hunter, C. V., Guo, X., & Tywoniw, R. (2020). Does peer assessment promote student learning? A meta-analysis. Assessment & Evaluation in Higher Education, 45(2), 193-211. DOI: 10.1080/02602938.2019.1620679.
  • Lu, J., & Law, N. (2012). Online peer assessment: Effects of cognitive and affective feedback. Instructional Science, 40(2), 257-275. DOI: 10.1007/s11251-011-9177-2.
  • Mann, C., & Stewart, F. (2000). Internet Communication and Qualitative Research: A Handbook for Researching Online. London: Sage.
  • McConlogue, T. (2015). Making Judgements: Investigating the process of composing and receiving peer feedback. Studies in Higher Education, 40(9), 1495–1506. DOI: 10.1080/03075079.2013.868878.
  • Moskal, B. M., & Leydens, J. A. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7, 71-81. DOI: 10.7275/q7rm-gg74.
  • Nelson, M. M., & Schunn, C. D. (2009). The nature of feedback: How different types of peer feedback affect writing performance. Instructional Science, 37(4), 375–401. DOI: 10.1007/s11251-008-9053-x.
  • Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218. DOI: 10.1080/03075070600572090.
  • Ohaja, M., Dunlea, M., & Muldoon, K. (2013). Group marking and peer assessment during a group poster presentation: The experiences and views of midwifery students. Nurse Education in Practice, 13(5), 466-470. DOI: 10.1016/j.nepr.2012.11.005.
  • Orsmond, P., Merry, S., & Reiling, K. (1996). The importance of marking criteria in the use of peer assessment. Assessment and Evaluation in Higher Education, 21(3), 239–250. DOI: 10.1080/0260293960210304.
  • Popham, W. J. (1997). What’s wrong-and what’s right- with rubrics. Educational Leadership, 55(2), 72-75. Retrieved from http://www.ascd.org/publications/educational-leadership/oct97/vol55/num02/What’s-Wrong%E2%80%94and-What’s-Right%E2%80%94with-Rubrics.aspx.
  • Reinholz, D. (2016). The assessment cycle: A model for learning through peer assessment. Assessment & Evaluation in Higher Education, 41(2), 301-315. DOI: 10.1080/02602938.2015.1008982.
  • Reuse-Durham, N. (2005). Peer evaluation as an active learning technique. Journal of Instructional Psychology, 32(4), 328–345. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.522.838&rep=rep1&type=pdf.
  • Roscoe, R. D., & Chi, M. T. (2007). Understanding tutor learning: Knowledge-building and knowledge-telling in peer tutors’ explanations and questions. Review of Educational Research, 77(4), 534-574. DOI: 10.3102/0034654307309920.
  • Shavelson, R. J., & Webb, N. M. (1991). Generalizability Theory: A primer. London: Sage Inc.
  • Stevens, D. D., & Levi, A. J. (2013). Introduction to Rubrics: An assessment Tool to Save Grading Time, Convey Effective Feedback, and Promote Student Learning. Virginia: Stylus Publishing, LLC.
  • Strijbos, J.-W., Narciss, S., & Dünnebier, K. (2010). Peer feedback content and sender’s competence level in academic writing revision tasks: Are they critical for feedback perceptions and efficiency? Learning and Instruction, 20(4), 291–303. DOI: 10.1016/j.learninstruc.2009.08.008.
  • Stuhlmann, J., Daniel, C., Dellinger, A., Denny, R.K., & Powers, T. (1999). A generalizability study of the effects of training on teachers’ abilities to rate children's writing using a rubric. Journal of Reading Psychology, 20, 107-127. DOI: 10.1080/027027199278439.
  • Şencan, H. (2005). Sosyal ve Davranışsal Ölçmelerde Güvenirlik ve Geçerlik. Ankara: Sözkesen Matbaacılık.
  • Tomlinson, P. (1998). Understanding Mentoring. Buckingham: Open University Press.
  • Topping, K. J. (2009). Peer assessment. Theory Into Practice, 48(1), 20-27. DOI: 10.1080/00405840802577569.
  • Topping, K. J., Smith, E. F., Swanson, I. & Elliot, A. (2000). Formative peer assessment of academic writing between postgraduate students. Assessment and Evaluation in Higher Education, 25, 149–169. DOI: 10.1080/713611428.
  • Van den Berg, I., Admiraal, W. & Pilot, A. (2006). Design principles and outcomes of peer assessment in higher education. Studies in Higher Education, 31, 341–356. DOI: 10.1080/03075070600680836.
  • Wang, Y., Li, H., Feng, Y., Jiang, Y. & Liu, Y. (2012). Assessment of programming language learning Based on Peer Code Review Model: Implementation and experience report. Computers & Education, 59(2), 412–422. DOI: 10.1016/j.compedu.2012.01.007.
  • Wanner, T., & Palmer, E. (2018). Formative self-and peer assessment for improved student learning: the crucial factors of design, teacher participation and feedback. Assessment and Evaluation in Higher Education, 43(7), 1032–1047. DOI: 10.1080/02602938.2018.1427698.
  • Zeng, L. M. (2020). Peer Review of teaching in higher education: A systematic review of its impact on the professional development of university teachers from the teaching expertise perspective. Educational Research Review, 31 (100333), 1-16. DOI: 10.1016/j.edurev.2020.100333.
  • Zhou, J., Zheng, Y., & Tai, J. H. M. (2020). Grudges and gratitude: The social-affective impacts of peer assessment. Assessment & Evaluation in Higher Education, 45(3), 345-358. DOI: 10.1080/02602938.2019.1643449.

Details

Primary Language: English
Subjects: Studies on Education
Section: Articles
Authors

Fatma Betül Kurnaz Adıbatmaz 0000-0002-7042-2159

Publication Date: December 31, 2021
Published in Issue: Year 2021, Volume: 4, Issue: 4 - ICETOL Special Issue

Cite

APA Kurnaz Adıbatmaz, F. B. (2021). Online peer assessment in teacher education. Journal of Educational Technology and Online Learning, 4(4), 835-853. https://doi.org/10.31681/jetol.987902



JETOL is abstracted and indexed by ERIC - Education Resources Information Center.