Deep networks as cognitive models: the case of reading in different orthographies
Abstract
Although artificial neural networks were born as neurocognitive models, the architectures used in AI today are not conceived as models of the brain. In recent years, deep networks have been fitted to brain activity and used as neural models, but their use as cognitive models remains less common. Here we use a transformer model, complemented with a simplified visual input, to model reading acquisition. First, we train the network to recognize speech input; then we use letter sounds and visual letter representations to train it to output the correct letters. We apply this model to our previous empirical results comparing learning in a transparent (Spanish) and an opaque (French) orthography, where phonological awareness is a much weaker predictor of reading in the transparent orthography than in the opaque one. We show that the difficulty of training correlates with orthographic opacity, and we interpret the results.
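The two-stage training procedure described above lends itself to a compact illustration. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: it assumes a small transformer encoder over phoneme embeddings, a speech-recognition head for stage one, and a letter-output head for stage two. All names (ReadingModel, speech_head, letter_head, vis_proj), dimensions, and the toy data are invented for illustration.

```python
# Hypothetical sketch: a shared transformer encoder is first trained on a
# speech task, then reused to map letter sounds plus a simplified visual
# letter code to letter identities.
import torch
import torch.nn as nn

class ReadingModel(nn.Module):
    def __init__(self, n_phonemes, n_letters, visual_dim, d_model=64):
        super().__init__()
        self.phon_embed = nn.Embedding(n_phonemes, d_model)
        self.vis_proj = nn.Linear(visual_dim, d_model)  # simplified visual input
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.speech_head = nn.Linear(d_model, n_phonemes)  # stage-1 output
        self.letter_head = nn.Linear(d_model, n_letters)   # stage-2 output

    def forward(self, phonemes, visual=None):
        x = self.phon_embed(phonemes)          # (batch, seq, d_model)
        if visual is not None:
            x = x + self.vis_proj(visual)      # fuse sound and visual codes
        return self.encoder(x)

model = ReadingModel(n_phonemes=40, n_letters=27, visual_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stage 1: train the encoder to recognize speech (phoneme identity).
phon = torch.randint(0, 40, (8, 5))            # toy phoneme sequences
h = model(phon)
loss1 = loss_fn(model.speech_head(h).flatten(0, 1), phon.flatten())
loss1.backward(); opt.step(); opt.zero_grad()

# Stage 2: train letter output from letter sounds plus visual codes.
# In an opaque orthography the phoneme-to-letter mapping is one-to-many,
# so this stage is harder to learn than in a transparent one.
vis = torch.randn(8, 5, 16)                    # toy visual letter codes
letters = torch.randint(0, 27, (8, 5))
h = model(phon, vis)
loss2 = loss_fn(model.letter_head(h).flatten(0, 1), letters.flatten())
loss2.backward(); opt.step(); opt.zero_grad()
```

The point of the sketch is the shared encoder: reading reuses the representation learned from speech, so the difficulty of the second stage reflects how consistently sounds map to letters in the orthography being learned.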