In recent decades there have been significant changes in the conceptualization of reading, as well as in how this activity should be assessed. Interest in the analysis of reading processes has led to the emergence of new explanatory models based primarily on contributions from cognitive psychology. In parallel, there have been notable advances in measurement procedures, especially in models based on Item Response Theory (IRT), and in the capacity and performance of the software used to manage and analyze data. These changes have contributed significantly to the rise of testing procedures such as computerized adaptive tests (CATs), whose defining characteristic is that the sequence of items presented is adapted to the level of competence the subject demonstrates. Likewise, incorporating elements of dynamic assessment (DA), in which graduated prompts are offered progressively, yields information about the type and degree of support required to optimize the subject's performance. In this sense, the confluence of contributions from DA and CATs offers a new way to approach the assessment of learning processes. In this article, we present longitudinal research conducted in two phases, through which a computerized dynamic adaptive assessment battery of reading processes (EDPL-BAI) was configured. The research involved 1,831 students (46% girls) from 13 public schools in three regions of Chile. The purpose of this study was to analyze the differential contribution to reading competence of the dynamic scores obtained in a subsample of 324 students (47% girls) from third to sixth grade after the implementation of a set of adaptive dynamic tests of morpho-syntactic processes. The results of the structural equation modeling indicate a good global fit.
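The adaptive logic described above — presenting each item according to the competence level the subject manifests — is commonly implemented in IRT-based CATs by selecting, at each step, the item with maximum Fisher information at the current ability estimate. The following sketch illustrates that selection rule under a two-parameter logistic (2PL) model; the item bank, parameter values, and function names are illustrative assumptions, not the calibrated EDPL-BAI bank.

```python
import math

# Illustrative 2PL item bank: name -> (discrimination a, difficulty b).
# These values are assumptions for the sketch, not EDPL-BAI calibrations.
ITEM_BANK = {
    "item1": (1.2, -1.0),
    "item2": (0.8, 0.0),
    "item3": (1.5, 0.5),
    "item4": (1.0, 1.5),
}

def probability(theta, a, b):
    """2PL probability of a correct response at ability level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta."""
    p = probability(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, administered):
    """Pick the not-yet-administered item most informative at theta."""
    candidates = {k: v for k, v in ITEM_BANK.items() if k not in administered}
    return max(candidates, key=lambda k: information(theta, *candidates[k]))
```

After each response, theta would be re-estimated (e.g., by maximum likelihood) and `next_item` called again, so the item sequence adapts to the subject's demonstrated level.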
Individual relationships show a significant contribution to reading competence of the calibrated score, which reflects the estimated knowledge level, as well as of the dynamic scores based on the values assigned to the graduated prompts required by the students. These results showed significant predictive value for reading competence and incremental validity relative to the predictions made by static criterion tests.
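A dynamic score "based on the assigned value of graduated prompts" can be sketched as a credit scheme in which each additional prompt a student needs reduces the credit earned on an item. The weights, the linear discount, and the averaging rule below are assumptions for illustration, not the scoring rule used in the EDPL-BAI.

```python
# Illustrative credit values after 0, 1, 2, or 3+ graduated prompts.
# These weights are assumptions, not the EDPL-BAI scoring rule.
PROMPT_WEIGHTS = [1.00, 0.75, 0.50, 0.25]

def dynamic_item_score(prompts_used, solved):
    """Credit for one item given how many graduated prompts were needed."""
    if not solved:
        return 0.0
    idx = min(prompts_used, len(PROMPT_WEIGHTS) - 1)
    return PROMPT_WEIGHTS[idx]

def dynamic_score(responses):
    """Mean dynamic score over (prompts_used, solved) response records."""
    return sum(dynamic_item_score(p, s) for p, s in responses) / len(responses)
```

Under this scheme, two students who both solve an item contribute different dynamic scores if one required more support, which is what allows such scores to add predictive information beyond a static right/wrong score.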