
A metric of children’s inference-making difficulty during language comprehension

Licensed under a Creative Commons Attribution (CC BY) 4.0 license.
Abstract

Reading comprehension research has identified two sources of children's difficulty with inference-making: a lack of semantic/content knowledge and difficulty with logical reasoning. NLP tools that model semantic knowledge (e.g., BERT) can predict adult inference-making, but it is unclear whether they can predict children's inference-making difficulty. In our ongoing study, we will examine whether our new inference-difficulty metric can predict kindergarten students' inference-making, using empirical data from a classroom intervention (ELCII). Students were given verbal information on a topic and multiple-choice questions that required them to draw an inference from two given scaffolds. To develop the metric, we will train BERT on children's books and ELCII content and compute an additive inference vector: the sum of the two vectorized scaffolds. The cosine distance between this additive vector and the vector of the correct inference may indicate inference difficulty. The results will indicate whether a probabilistic semantic space can model children's inferences or whether other components (e.g., logical reasoning) must also be considered.
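The abstract describes the metric as the cosine distance between an additive inference vector (the sum of the two scaffold embeddings) and the embedding of the correct inference. The sketch below illustrates that computation only; it is not the authors' implementation. It uses an off-the-shelf sentence-embedding model (`all-MiniLM-L6-v2`) in place of the BERT model the study plans to train on children's books and ELCII content, and the scaffold sentences are made up for illustration rather than drawn from the ELCII materials.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in embedding model; the study would instead use a BERT model
# trained on children's books and ELCII content.
model = SentenceTransformer("all-MiniLM-L6-v2")


def inference_difficulty(scaffold_a: str, scaffold_b: str, correct_inference: str) -> float:
    """Cosine distance between the additive inference vector
    (sum of the two scaffold embeddings) and the correct inference's embedding."""
    vec_a, vec_b, vec_correct = model.encode([scaffold_a, scaffold_b, correct_inference])
    additive = vec_a + vec_b  # additive inference vector
    cosine_similarity = np.dot(additive, vec_correct) / (
        np.linalg.norm(additive) * np.linalg.norm(vec_correct)
    )
    return 1.0 - cosine_similarity  # larger distance -> predicted harder inference


# Hypothetical example (not taken from the ELCII intervention):
difficulty = inference_difficulty(
    "Penguins live where it is very cold.",
    "Animals that live in cold places have thick layers of fat.",
    "Penguins have thick layers of fat to stay warm.",
)
print(f"Predicted inference difficulty: {difficulty:.3f}")
```

Under this framing, item-level difficulty scores could then be compared against children's empirical accuracy on the corresponding multiple-choice questions to test whether the semantic-space metric is predictive.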
