DEVELOPING A VALIDITY ARGUMENT FOR THE ENGLISH PLACEMENT TEST AT BTEC INTERNATIONAL COLLEGE DANANG CAMPUS

Bibliographic Details
First author: Võ Thị Thu Hiền
Other authors: TS. Võ Thanh Sơn Ca
Format: luanvanthacsi
Language: English
Published: Trường Đại học Ngoại ngữ, Đại học Đà Nẵng, 2024
Online access: https://data.ufl.udn.vn//handle/UFL/443
Holding library: Trường Đại học Ngoại ngữ - Đại học Đà Nẵng
Other Bibliographic Details
Abstract: Foreign or second language writing is one of the most important skills in language learning and teaching, and universities use scores from writing assessments to place students in language support courses. For the inferences based on test scores to be valid, it is therefore important to build a validity argument for the test. This study built a validity argument for the English Placement Writing test (EPT W) at BTEC International College Danang Campus. In particular, the study examined two inferences, generalization and evaluation, by investigating the extent to which tasks and raters contributed to test score variability, how many raters and tasks would need to be involved in the assessment process to achieve a test score dependability of at least 0.85, and the extent to which vocabulary distributions differed across proficiency levels of academic writing. To achieve these goals, test score data from 21 students who each completed two writing tasks were analyzed using Generalizability theory. Decision studies (D-studies) were employed to determine the number of tasks and raters needed to reach a dependability coefficient of 0.85. The 42 written responses from the 21 students were then analyzed to examine vocabulary distributions across proficiency levels. The results suggested that tasks were the main source of test score variance, whereas raters contributed to score variance to a more limited extent. To obtain a dependability coefficient of 0.85, the test should include 14 raters and 10 tasks, or 10 raters and 12 tasks. In terms of vocabulary distributions, low-level students produced less varied language than higher-level students; the findings suggest that higher-proficiency learners produce a wider range of word families than their lower-proficiency counterparts.
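The G-theory analysis summarized above decomposes score variance into person, task, and rater components, and the D-studies then project how dependability changes as the numbers of tasks and raters grow. As a minimal sketch of that projection, assuming a fully crossed persons × tasks × raters design and hypothetical variance components (placeholders, not the thesis's actual estimates), the absolute-decision dependability coefficient Φ can be computed like this:

```python
def phi_dependability(var, n_tasks, n_raters):
    """Absolute-decision dependability (Phi) for a fully crossed
    persons x tasks x raters (p x t x r) G-study design.

    var maps each variance component ("p", "t", "r", "pt", "pr",
    "tr", "ptr") to its estimated value; n_tasks and n_raters are
    the D-study sample sizes being projected.
    """
    absolute_error = (
        var["t"] / n_tasks                                  # task main effect
        + var["r"] / n_raters                               # rater main effect
        + var["pt"] / n_tasks                               # person x task
        + var["pr"] / n_raters                              # person x rater
        + (var["tr"] + var["ptr"]) / (n_tasks * n_raters)   # residual
    )
    return var["p"] / (var["p"] + absolute_error)

# Hypothetical components with task variance dominating, mirroring the
# study's qualitative finding; real values would come from the G-study.
components = {"p": 1.0, "t": 1.5, "r": 0.05,
              "pt": 0.6, "pr": 0.05, "tr": 0.02, "ptr": 0.4}

# D-study projections: dependability rises as tasks and raters are added.
for n_tasks, n_raters in [(2, 2), (10, 14), (12, 10), (14, 5)]:
    phi = phi_dependability(components, n_tasks, n_raters)
    print(f"{n_tasks:2d} tasks x {n_raters:2d} raters -> Phi = {phi:.3f}")
```

With these placeholder components, adding tasks raises Φ far more than adding raters, which is the pattern a task-dominated variance decomposition predicts; the specific rater and task counts reported in the thesis follow from its own estimated components, not from these illustrative numbers.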