Compositional generalization that requires the production and comprehension of novel _structures_ from observed constituent parts has been shown to be challenging even for very powerful neural network models of language. However, one of the test cases that poses the greatest difficulty---generalization of modifiers to unobserved syntactic positions---has not been empirically attested in human learners under the same exposure conditions assumed by these tests. In this work, we use artificial language learning to test whether adult human learners generalize or withhold the production of modification in novel syntactic positions. We find that adult native speakers of English are biased towards producing modifiers in unobserved positions (thereby producing novel structures), even when they observe modification in only a single syntactic position, and even when knowledge of their native language actively biases them against the plausibility of the target structures.