
Universal linguistic inductive biases via meta-learning

Creative Commons Attribution (CC BY) 4.0 license
Abstract

How do learners acquire languages from the limited data available to them? This process must involve some inductive biases (factors that affect how a learner generalizes), but it is unclear which inductive biases can explain observed patterns in language acquisition. To facilitate computational modeling aimed at addressing this question, we introduce a framework for giving particular linguistic inductive biases to a neural network model; such a model can then be used to empirically explore the effects of those inductive biases. This framework disentangles universal inductive biases, which are encoded in the initial values of a neural network's parameters, from non-universal factors, which the neural network must learn from data in a given language. The initial state that encodes the inductive biases is found with meta-learning, a technique through which a model discovers how to acquire new languages more easily via exposure to many possible languages. By controlling the properties of the languages that are used during meta-learning, we can control the inductive biases that meta-learning imparts. We demonstrate this framework with a case study based on syllable structure. First, we specify the inductive biases that we intend to give our model, and then we translate those inductive biases into a space of languages from which a model can meta-learn. Finally, using existing analysis techniques, we verify that our approach has imparted the linguistic inductive biases that it was intended to impart.
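The mechanism the abstract describes, encoding inductive biases in a network's initial parameters by meta-learning over a space of languages, has the shape of MAML-style optimization: an inner loop adapts a copy of the model to one sampled language, and an outer loop updates the shared initialization so that such adaptation succeeds quickly. The following is a minimal sketch of a first-order variant in PyTorch; the toy sample_language task, the model architecture, and all hyperparameters are illustrative assumptions, not the paper's actual syllable-structure setup.

import copy
import torch
import torch.nn as nn

def sample_language():
    # Stand-in for drawing a language from the meta-training space
    # (hypothetical toy task: each "language" is a random linear rule
    # mapping 8-dim inputs to one of 2 classes).
    rule = torch.randn(2, 8)
    x = torch.randn(64, 8)
    y = (x @ rule.t()).argmax(dim=1)
    return x[:32], y[:32], x[32:], y[32:]   # support set, query set

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):                    # outer loop over languages
    xs, ys, xq, yq = sample_language()
    learner = copy.deepcopy(model)          # adapt a copy to this language
    inner_opt = torch.optim.SGD(learner.parameters(), lr=0.1)
    for _ in range(5):                      # inner loop: language learning
        inner_opt.zero_grad()
        loss_fn(learner(xs), ys).backward()
        inner_opt.step()
    # First-order meta-update: take the gradient of the adapted learner's
    # loss on held-out query data and apply it to the shared initialization,
    # pushing the initial parameters toward states that adapt quickly.
    learner.zero_grad()
    loss_fn(learner(xq), yq).backward()
    meta_opt.zero_grad()
    for p, lp in zip(model.parameters(), learner.parameters()):
        p.grad = lp.grad.clone()
    meta_opt.step()

After meta-training, model holds an initialization whose "innate" dispositions reflect the distribution of meta-training languages, which is how controlling that distribution controls the imparted biases. Full MAML would also differentiate through the inner-loop updates; this sketch drops that second-order term for brevity.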
