
ChatGPT: A reliable assistant for the evaluation of students’ written texts?

There is growing interest in the potential of Artificial Intelligence (AI) to assist in various educational tasks, including writing assessment. However, the comparative efficacy of human and AI-powered systems in this domain remains a subject of ongoing exploration. This study aimed to compare the...

Detailed description

Main authors: Atasoy, Arzu; Nezhad Arani, Saieed Moslemi
Format: Book
Language: English
Published: 2025
Subjects: Assessing writing; ChatGPT
Online access: https://link.springer.com/article/10.1007/s10639-025-13553-1
https://dlib.phenikaa-uni.edu.vn/handle/PNK/11836
id oai:localhost:PNK-11836
record_format dspace
institution Digital Phenikaa
collection Digital Phenikaa
language English
topic Assessing writing
ChatGPT
description There is growing interest in the potential of Artificial Intelligence (AI) to assist in various educational tasks, including writing assessment. However, the comparative efficacy of human and AI-powered systems in this domain remains a subject of ongoing exploration. This study aimed to compare the accuracy of human raters (teachers and pre-service teachers) and AI systems (ChatGPT and trained ChatGPT) in classifying written texts. The study employed both chi-square tests and logistic regression analysis to examine the relationship between rater groups (human vs. machine) and the accuracy of text classification. Initial chi-square analyses suggested no significant differences in classification accuracy between human and AI raters. However, the logistic regression model revealed a significant relationship, with human raters demonstrating a higher rate of correct classification compared to their AI counterparts. The logistic model achieved an 81.3% success rate in predicting correct classifications. While AI systems show promise in automated text processing, human raters currently demonstrate superior accuracy in writing assessment tasks. These findings highlight the need for further research into the strengths and limitations of both human and AI-based approaches. The integration of AI in educational assessment should focus on complementing and supporting, rather than replacing, the expertise of human educators.
format Book
author Atasoy, Arzu
Nezhad Arani, Saieed Moslemi
title ChatGPT: A reliable assistant for the evaluation of students’ written texts?
publishDate 2025
url https://link.springer.com/article/10.1007/s10639-025-13553-1
https://dlib.phenikaa-uni.edu.vn/handle/PNK/11836
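The abstract describes comparing human and AI raters with a chi-square test of independence and a logistic regression predicting correct classification from rater group. The following is a minimal sketch of what such an analysis could look like in Python with scipy and statsmodels; it is not the authors' code, and the sample sizes, accuracy rates, and data are invented purely for illustration.

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# Hypothetical ratings: 1 = text classified correctly, 0 = misclassified.
# Group accuracies (0.85 vs. 0.78) are assumptions, not figures from the study.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "rater_group": ["human"] * n + ["ai"] * n,
    "correct": np.concatenate([
        rng.binomial(1, 0.85, n),   # assumed human accuracy
        rng.binomial(1, 0.78, n),   # assumed AI accuracy
    ]),
})

# Chi-square test on the 2x2 table of rater group vs. classification accuracy.
table = pd.crosstab(df["rater_group"], df["correct"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Logistic regression: probability of a correct classification by rater group.
X = sm.add_constant(
    (df["rater_group"] == "human").astype(int).rename("is_human")
)
model = sm.Logit(df["correct"], X).fit(disp=False)
print(model.summary())

# Crude in-sample success rate of the fitted model (the abstract reports an
# 81.3% success rate for the real data; this toy figure will differ).
pred = (model.predict(X) >= 0.5).astype(int)
print("in-sample success rate:", (pred == df["correct"]).mean())

As in the study, the chi-square test and the logistic regression can disagree: the contingency table alone may show no significant association, while the regression, which models the probability of a correct classification directly, can still detect a significant effect of rater group.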