
Can a Composite Metacognitive Judgment Accuracy Score Successfully Capture Performance Variance during Multimedia Learning?

Abstract

Theoretical models of self-regulated learning highlight the importance and dynamic nature of metacognitive monitoring and regulation. However, traditional research typically has not examined how different judgments, or the relative timing of those judgments, influence each other, especially in complex learning environments. We compared six statistical models of the performance of undergraduates (n = 55) learning in MetaTutor-IVH, a multimedia learning environment. Three types of prompted metacognitive judgments (ease of learning [EOL] judgments, content evaluations [CEs], and retrospective confidence judgments [RCJs]) were used as individual predictors and were combined into a uniformly weighted composite score and an empirically weighted composite score across the learning session. The uniformly weighted composite score captured performance better than the models using only an EOL judgment or an RCJ. However, the empirically weighted composite model outperformed all other models. Our results suggest that metacognitive judgments should not be treated as independent phenomena but as an intricate and interconnected process.
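
The abstract does not spell out how the two composites were built. The minimal Python sketch below illustrates one plausible construction, assuming each judgment type is first reduced to a per-learner accuracy score and that the empirical weights come from an ordinary least-squares regression of performance on the three scores; the simulated data, variable names, and regression-based weighting are illustrative assumptions, not the paper's actual procedure.

    import numpy as np

    # Simulated per-learner judgment accuracy scores (0-1 scale) for illustration only;
    # real values would come from comparing each MetaTutor-IVH judgment to performance.
    rng = np.random.default_rng(0)
    n = 55
    eol = rng.uniform(0, 1, n)   # ease of learning judgment accuracy
    ce = rng.uniform(0, 1, n)    # content evaluation accuracy
    rcj = rng.uniform(0, 1, n)   # retrospective confidence judgment accuracy
    performance = 0.2 * eol + 0.3 * ce + 0.5 * rcj + rng.normal(0, 0.05, n)

    judgments = np.column_stack([eol, ce, rcj])

    # Uniformly weighted composite: every judgment type contributes equally.
    uniform_composite = judgments.mean(axis=1)

    # Empirically weighted composite: weights estimated from the data, here via
    # ordinary least squares of performance on the three accuracy scores (assumption).
    X = np.column_stack([np.ones(n), judgments])
    coefs, *_ = np.linalg.lstsq(X, performance, rcond=None)
    empirical_composite = X @ coefs

    def variance_captured(score, y):
        # Squared Pearson correlation: share of performance variance the score captures.
        r = np.corrcoef(score, y)[0, 1]
        return r ** 2

    print("uniform composite:  ", variance_captured(uniform_composite, performance))
    print("empirical composite:", variance_captured(empirical_composite, performance))

Because the empirical weights are fit to the same performance measure, this kind of composite will, by construction, capture at least as much variance as the uniform one in-sample; the paper's model comparison presumably addresses this with appropriate statistical controls.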
