A PROPOSED SCORING RUBRIC FOR ASSESSING WRITING SKILLS IN TEACHING TURKISH AS A FOREIGN/SECOND LANGUAGE: A1-A2 LEVELS
Abstract
Writing, one of the four basic language skills, is a complex, multifaceted process that reflects learners' command of grammar and vocabulary and their ability to express and organize their thoughts. Measuring and evaluating writing skills is therefore crucial for objectively determining learners' developmental levels. Graded scoring rubrics are necessary for objective and consistent evaluation: they ensure objectivity in the evaluation process, allow learners' writing performance to be analyzed in detail across several dimensions, and provide learners with comprehensive feedback. Such feedback helps identify gaps in instruction, improving learners' writing skills and enhancing the overall efficiency of the teaching process.
In this study, two different analytical graded scoring rubrics were developed to evaluate the writing performances of learners at A1 and A2 levels who are learning Turkish as a foreign or second language. Additionally, evaluator guides were prepared to standardize the evaluation process, clarify how evaluators should apply the rubrics and interpret the criteria, and ensure more objective and reliable results.
During rubric development, the study employed an exploratory sequential design, a mixed-methods approach. In the qualitative phase, a literature review on writing skills and rubric development was conducted. Basic-level outcomes outlined in the Common European Framework of Reference for Languages (CEFR) and the Programme for Teaching Turkish as a Foreign Language (TYDÖP) were analyzed, along with those included in Turkish as a Foreign Language teaching sets. Scoring rubrics used by The European Language Certificates (telc), Cambridge, and Turkish Language Teaching Centers (TÖMER) were also examined. Based on these reviews, an item pool was created; dimensions and criteria were determined with input from instructors, and criterion descriptors were then developed accordingly.
To establish the content validity of the scoring rubrics, expert opinions were obtained from five instructors with at least five years of field experience using the Davis (1992) technique, and content validity indices were calculated. The reliability of the analytical graded scoring rubrics was assessed using data from twenty evaluators who scored six A1- and A2-level writing exam papers with the rubrics and guides. Kendall's W coefficient of concordance, calculated from these data, revealed a high level of agreement in the evaluations conducted with the developed rubrics. Furthermore, a Kruskal-Wallis test showed no significant differences in evaluations by the evaluators' level of experience.
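The two statistics named above can be illustrated with a short sketch. The data below are hypothetical, not the study's: the Davis (1992) content validity index is the share of experts rating an item 3 or 4 on a 4-point relevance scale, and Kendall's W measures how closely several evaluators' rankings of the same papers agree (W = 1 means identical rankings, W = 0 means no agreement; ties are not handled in this minimal version).

```python
def content_validity_index(ratings):
    """Davis (1992) technique: proportion of experts rating the item
    3 or 4 on a 4-point relevance scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def kendalls_w(scores):
    """Kendall's coefficient of concordance for m raters x n items.
    scores[rater][item] = score that rater gave that item (no ties)."""
    m, n = len(scores), len(scores[0])
    # Convert each rater's scores to within-rater ranks (1 = lowest score).
    ranks = []
    for row in scores:
        order = sorted(range(n), key=lambda i: row[i])
        rank_row = [0] * n
        for rank, idx in enumerate(order, start=1):
            rank_row[idx] = rank
        ranks.append(rank_row)
    # Sum of ranks per item and squared deviations from the mean sum.
    col_sums = [sum(ranks[r][i] for r in range(m)) for i in range(n)]
    mean_sum = sum(col_sums) / n
    s = sum((c - mean_sum) ** 2 for c in col_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical: 5 experts rate one rubric item 4, 4, 3, 4, 3 -> CVI = 1.0
cvi = content_validity_index([4, 4, 3, 4, 3])

# Hypothetical: 3 evaluators score 4 exam papers; they rank the papers
# identically, so W = 1.0 (perfect concordance).
w = kendalls_w([[10, 20, 30, 40],
                [12, 22, 31, 45],
                [11, 19, 33, 44]])
```

In practice, a significance test for W (e.g. via the chi-square approximation) and tie corrections would be applied; this sketch only shows the core computation behind the agreement figures reported in the study.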
These findings demonstrate that the developed graded scoring rubrics are valid, reliable, and practical tools for assessing writing skills in the field.