Human raters' perceptions of automated assessment of oral language skills, the digital assessment process and the dimensions of speech to be assessed

Authors

Keywords:

automated assessment, language assessment, oral language skills

Abstract

This study investigated human raters’ perceptions of the automated assessment of oral language skills. The raters (n = 37) participated in three assessment experiments organized by the DigiTala research project using Moodle and Zoom. The raters assessed speech samples from learners of Finnish and Swedish using criteria created by the project, consisting of a holistic scale and five analytical scales. After the assessment, the raters responded to a questionnaire that included Likert-scale and open-ended questions. Numerical responses were analyzed using descriptive statistics, and open responses through content analysis. The results will benefit those working on automated assessment and oral language assessment.