Why is scaling up models of language evolution hard?
Abstract
Computational model simulations have been very fruitful for gaining insight into how the systematic structure we observe in the world's natural languages could have emerged through cultural evolution. However, these simulations operate on a toy scale compared to the size of actual human vocabularies, because simulations with larger lexicons would pose prohibitive computational resource demands. Using computational complexity analysis, we show that this is not an implementational artifact, but instead reflects a deeper theoretical issue: these models are, in their current formulation, computationally intractable. This has important theoretical implications, because it means that there is no way of knowing whether or not the properties and regularities observed for the toy models would scale up. All is not lost, however: awareness of intractability allows us to face the issue of scaling head-on, and can guide the development of our theories.