Kinship terminology varies cross-linguistically, but there are constraints on which kin may be categorised together. One proposed constraint on kinship diversity is internal co-selection: an evolutionary process in which terminological changes in one generation of the kinship paradigm co-occur with parallel changes in other generations, increasing system-wide predictive structure. We compared kinship systems from 544 natural languages to simulated baselines and found higher-than-chance mutual information (MI) between generations of kin, suggesting a selective pressure for internal co-selection. We then tested experimentally whether this systematicity increases learnability. Participants were taught artificial kinship systems with either maximal or minimal MI between generations. We predicted that the high-MI system would be easier to learn, but participants showed little evidence of learning in either condition. A follow-up experiment tested whether predictive structure facilitates generalisation rather than learning. Although other strategies are common, we found that participants often maximise predictive structure when generalising terms to new kin.
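The MI measure referred to above can be made concrete with a minimal sketch. This is not the paper's actual analysis pipeline; the kin positions, term labels, and the two toy paradigms below are invented for illustration. It simply computes MI in bits between paired category labels, where each index is a kin position and the value is the term assigned to the corresponding relative in the parental versus one's own generation.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """MI in bits between two paired sequences of category labels."""
    n = len(xs)
    px = Counter(xs)              # marginal counts for generation 1
    py = Counter(ys)              # marginal counts for generation 2
    pxy = Counter(zip(xs, ys))    # joint counts over paired kin positions
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Hypothetical high-MI paradigm: the parental-generation split (terms "F"
# vs. "M") is mirrored exactly by the own-generation split ("B" vs. "Z"),
# so one generation fully predicts the other.
parental = ["F", "F", "M", "M"]
own      = ["B", "B", "Z", "Z"]
print(mutual_information(parental, own))      # 1.0 bit

# Hypothetical low-MI paradigm: the same terms, but the splits cross-cut,
# so knowing the parental term tells you nothing about the sibling term.
print(mutual_information(parental, ["B", "Z", "B", "Z"]))  # 0.0 bits
```

Under this measure, "internal co-selection" corresponds to natural paradigms sitting closer to the first (predictively structured) configuration than simulated baselines would suggest.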