
Insights from the first BabyLM Challenge: Training sample-efficient language models on a developmentally plausible corpus

Licensed under a Creative Commons Attribution (CC BY) 4.0 license.
Abstract

Language models have great potential as cognitive models for studying human language acquisition, but current models are far less data-efficient than human learners. Children acquire language from 100 million words or fewer, whereas large language models are trained on trillions of words. We discuss the prospects for improving language models' developmental plausibility through a meta-analysis of results from the 2023 BabyLM Challenge. BabyLM was a competition that invited participants to train a language model on a 100 million-word corpus of transcribed speech and child-appropriate texts. Results from over 30 submissions showed that new machine learning techniques and increased training iterations yielded models that outperformed leading large language models on measures of grammar, language understanding, and linguistic generalization, while cognitively plausible approaches such as curriculum learning were less effective. We discuss the implications of these and other findings for computational cognitive modeling and explore ideas for ensuring that future competitions contribute to cognitive science.
