Open Access Publications from the University of California


UCLA Electronic Theses and Dissertations

Slaying the Great Green Dragon: Learning and modelling iterable ordered optional adjuncts


Adjuncts and arguments exhibit different syntactic behaviours, but modelling this difference in minimalist syntax is challenging: on the one hand, adjuncts differ from arguments in that they are optional, transparent, and iterable, but on the other hand they are often strictly ordered, reflecting the kind of strict selection seen in argument application. The former properties mean the derivation proceeds the same way whether or not the adjuncts are present, but the latter means the derivation must know which adjuncts have already been adjoined, to avoid adjoining new ones out of order. The first half of this dissertation proposes a precise minimalist model of adjuncts that accounts for both behaviours.
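The tension described above can be sketched in code. The following is my own illustrative toy, not the dissertation's formalism: adjunct categories, their ranks, and the class and method names (`ADJUNCT_RANK`, `Derivation`, `adjoin`, `spell_out`) are all hypothetical. The derivation tracks the highest-ranked adjunct adjoined so far, so adjunction is optional (zero adjuncts is fine), iterable (a category may repeat), yet ordered (a lower-ranked category may not follow a higher one).

```python
from dataclasses import dataclass, field

# Hypothetical adjunct categories with a strict relative order
# (lower rank = adjoined closer to the head).
ADJUNCT_RANK = {"colour": 1, "size": 2, "opinion": 3}

@dataclass
class Derivation:
    head: str
    adjuncts: list = field(default_factory=list)
    _max_rank: int = 0  # highest rank adjoined so far

    def adjoin(self, adjunct: str, category: str) -> None:
        rank = ADJUNCT_RANK[category]
        # Iterable: the same category may repeat (rank == _max_rank).
        # Ordered: a lower-ranked category may not follow a higher one.
        if rank < self._max_rank:
            raise ValueError(f"{category} adjunct out of order")
        self._max_rank = rank
        self.adjuncts.append(adjunct)

    def spell_out(self) -> str:
        # Adjuncts surface outermost-first (reverse of adjunction order).
        return " ".join(list(reversed(self.adjuncts)) + [self.head])

d = Derivation("dragon")
d.adjoin("green", "colour")  # closest to the head
d.adjoin("great", "size")    # farther out; order respected
print(d.spell_out())         # "great green dragon"
```

Adjoining "red" (colour) after "great" (size) would raise, capturing the ordering restriction, while adjoining a second size adjunct is allowed, capturing iterability.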

The second half considers the learnability of two closely related properties of adjuncts: their optionality and iterability. Many formal learning models predict a relationship between optionality and iterability, and any learning model of human language needs to be able to generalise from limited to indefinite repetition, since many languages include such sentences as "I really really really ... really love linguistics". All of the formal models I examine make this generalisation, and a study of people learning an artificial language indicates that human learners make it too.
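The generalisation at issue can be illustrated concretely. This is a toy of my own, not any model examined in the dissertation: a learner that has only ever seen "really" repeated a bounded number of times hypothesises a grammar with unbounded iteration, encoded here as a Kleene star.

```python
import re

# Observed data: bounded repetition only (at most two "really"s).
observed = [
    "I love linguistics",
    "I really love linguistics",
    "I really really love linguistics",
]

# Hypothesised grammar: any number of "really"s (Kleene star).
hypothesis = re.compile(r"I (really )*love linguistics")

# The hypothesis covers the observed data...
assert all(hypothesis.fullmatch(s) for s in observed)

# ...and generalises to repetition counts never observed.
assert hypothesis.fullmatch("I " + "really " * 10 + "love linguistics")
```

The jump from the finite data to the starred grammar is exactly the limited-to-indefinite generalisation the abstract describes.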
