We consider the process by which the syntactic parameters of human language are set. Previous work has shown that for natural languages there can be no instant, "automatic" triggering of parameters, because trigger properties in natural languages are often deep properties, not recognizable without parsing the input sentence. There are parametric algorithms that learn by parsing, but they are inefficient because they do not respect the Parametric Principle: they evaluate millions of grammars rather than establishing the values of a few dozen parameters. They do so because they cannot tell in advance which input sentences are pertinent to which parameters, and because they have no protection against mislearning due to parametric ambiguity of the input. One model that does implement the Parametric Principle is the Structural Triggers Learner (STL). For an STL, a parameter value and its trigger are one and the same thing: what we call a structural trigger or treelet (a subtree, or in the limiting case a single feature). These structural triggers are made available by UG and adopted into the learner's grammar just in case they prove essential for parsing input sentences. This permits efficient recognition of the parameter values entailed by input sentences and allows the learner to avoid errors by discarding ambiguous input. However, the high degree of ambiguity inherent in natural language impedes learning even for this efficient system: an STL must wait a long time between unambiguous inputs. As we explain, this problem is particularly acute in the early stages of learning. In this paper we give a computational analysis of the performance of an STL. We then identify an important factor, the parametric expression rate, that holds promise of a solution to this early learning problem.
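The STL's update policy, adopting treelets only when they prove essential for parsing and discarding parametrically ambiguous input, can be sketched as an idealized simulation. This is a minimal illustration, not the paper's formalism: the treelet names are invented, and each input sentence is abstracted to the set of alternative treelet combinations that could license it, so an input counts as unambiguous just in case exactly one combination works.

```python
# Hypothetical sketch of an idealized Structural Triggers Learner (STL).
# Assumption (ours, not the source's): a sentence is represented by the set
# of treelet combinations (frozensets) that would each make it parsable.

def stl_step(grammar, parses):
    """Update the grammar from one input sentence.

    If more than one treelet combination would license the sentence, the
    input is parametrically ambiguous and is discarded without learning.
    """
    if len(parses) == 1:
        # Unambiguous: these treelets are provably required, so adopt them.
        grammar |= next(iter(parses))
    return grammar

def learn(inputs):
    grammar = set()
    for parses in inputs:
        grammar = stl_step(grammar, parses)
    return grammar

# Toy run with invented treelet names; only unambiguous inputs contribute.
inputs = [
    {frozenset({"V2"}), frozenset({"TopicDrop"})},  # ambiguous -> discarded
    {frozenset({"V2"})},                            # unambiguous -> adopt V2
    {frozenset({"wh-movement", "V2"})},             # unambiguous -> adopt both
]
print(sorted(learn(inputs)))  # -> ['V2', 'wh-movement']
```

The long waits the abstract mentions show up directly in this toy: every ambiguous input is a wasted learning opportunity, so a high ambiguity rate stretches the interval between grammar updates.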