Neural network modelling on Korean monolingual children's comprehension of suffixal passive construction in Korean

Creative Commons Attribution (CC BY) 4.0 license
Abstract

This study explores a GPT-2 architecture's capacity to capture monolingual children's comprehension behaviour in Korean, a language underexplored in this context. We examine its performance in processing a suffixal passive construction, which involves verbal morphology and the interpretive procedures driven by that morphology. By fine-tuning the model via patching and varying its hyperparameters, we assess the resulting models' classification accuracy on the test items used in Shin (2022a). The results reveal discrepancies between the models' and the children's response patterns, highlighting the limitations of neural networks in capturing features of child language. This prompts further investigation into computational models' capacity to elucidate developmental trajectories of child language that have been revealed through corpus-based or experimental research.
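As a rough illustration of the kind of setup the abstract describes, the sketch below fine-tunes a GPT-2-style model as a binary classifier over sentence stimuli using the Hugging Face Transformers library. It is not the authors' code: the checkpoint name (skt/kogpt2-base-v2), the toy dataset, the label scheme, and the hyperparameter values are all assumptions standing in for details the abstract does not give.

```python
# Hypothetical sketch only: fine-tuning a GPT-2-style model as a sentence
# classifier. Checkpoint, data, labels, and hyperparameters are placeholders,
# not the study's actual materials (those come from Shin, 2022a).
import torch
from transformers import (
    AutoTokenizer,
    GPT2ForSequenceClassification,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "skt/kogpt2-base-v2"  # assumed Korean GPT-2 checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

model = GPT2ForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Toy stand-in for the suffixal-passive test items (placeholders, not real stimuli).
train_texts = ["문장 1", "문장 2"]
train_labels = [1, 0]  # e.g. 1 = target-like interpretation, 0 = non-target-like


class StimulusDataset(torch.utils.data.Dataset):
    """Wraps tokenized sentences and labels for the Trainer API."""

    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.enc.items()}
        item["labels"] = self.labels[idx]
        return item


# Hyperparameters here are illustrative; the study varies them systematically.
args = TrainingArguments(
    output_dir="gpt2-passive-cls",
    num_train_epochs=3,
    learning_rate=5e-5,
    per_device_train_batch_size=8,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=StimulusDataset(train_texts, train_labels),
)
trainer.train()
```

Classification accuracy on held-out test items could then be compared against children's response patterns; how the fine-tuned "patching" step is implemented is not specified in the abstract and is left out of this sketch.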
