Evaluating large language models in analysing classroom dialogue
by: Yun Long, Haifeng Luo, Yu Zhang
| Format: | Article |
|---|---|
| Published: | Nature Portfolio, 2024-10-01 |
Description
This study explores the use of Large Language Models (LLMs), specifically GPT-4, in analysing classroom dialogue, a key task for teaching diagnosis and quality improvement. Traditional qualitative methods are both knowledge- and labour-intensive, and this research investigates the potential of LLMs to streamline and enhance that process. Using datasets from middle school mathematics and Chinese classes, classroom dialogues were manually coded by experts and then analysed with a customised GPT-4 model. The study compares the manual annotations with GPT-4's outputs to evaluate efficacy; metrics include time efficiency, inter-coder agreement, and reliability between human coders and GPT-4. Results show significant time savings and high coding consistency between the model and human coders, with only minor discrepancies. These findings highlight the strong potential of LLMs in teaching evaluation and facilitation.
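The abstract names inter-coder agreement between human coders and GPT-4 as one of its evaluation metrics. As an illustration only (the entry does not state which agreement statistic the authors used), a minimal sketch of Cohen's kappa, a standard chance-corrected measure of agreement between two coders, might look like:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' label sequences.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by
    chance from each coder's marginal label frequencies.
    """
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("coders must label the same non-empty items")
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    if p_e == 1:  # degenerate case: both coders used a single identical label
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical dialogue codes from a human coder and from GPT-4:
human = ["question", "question", "explain", "explain"]
model = ["question", "explain",  "explain", "explain"]
print(cohens_kappa(human, model))  # 0.5
```

Libraries such as scikit-learn provide an equivalent `cohen_kappa_score`; the hand-rolled version above just makes the chance-correction explicit.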